Search Results

Search found 9563 results on 383 pages for 'insertion sort'.


  • Matching users based on a series of questions

    - by SeanWM
    I'm trying to figure out a way to match users based on specific personality traits. Each trait will have its own category. I figure in my user table I'll add a column for each category:

        id  name   cat1  cat2  cat3
        1   Sean   ?     ?     ?
        2   Other  ?     ?     ?

    Let's say I ask each user 3 questions in each category. For each question, you can answer one of the following: No, Maybe, Yes. How would I calculate one number based off the answers to those 3 questions that would hold a value I can compare other users to? I was thinking of having some sort of weight, like No -> 0, Maybe -> 1, Yes -> 2, and then doing some sort of meaningful calculation. I want to end up with something like this so I can query the users and find who matches closely:

        id  name   cat1  cat2  cat3
        1   Sean   4     5     1
        2   Other  1     2     5

    In the situation above, the users don't really match. I'd want to match with someone within +1 or -1 of my score in each category. I'm not a math guy, so I'm just looking for some ideas to get me started.
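
    A minimal sketch in Python of the weighting idea (the No/Maybe/Yes weights, the per-category sums and the ±1 tolerance come from the question; the function names and sample answers are made up for illustration):

        # Map each answer to a weight, sum the three answers per category, and call
        # two users a match when every category score differs by at most 1.
        WEIGHTS = {"No": 0, "Maybe": 1, "Yes": 2}

        def category_scores(answers):
            # answers: {"cat1": ["Yes", "Yes", "No"], ...}  ->  {"cat1": 4, ...}
            return {cat: sum(WEIGHTS[a] for a in replies) for cat, replies in answers.items()}

        def is_match(scores_a, scores_b, tolerance=1):
            return all(abs(scores_a[cat] - scores_b[cat]) <= tolerance for cat in scores_a)

        sean = category_scores({"cat1": ["Yes", "Yes", "No"],      # 4
                                "cat2": ["Yes", "Yes", "Maybe"],   # 5
                                "cat3": ["No", "Maybe", "No"]})    # 1
        other = category_scores({"cat1": ["No", "Maybe", "No"],
                                 "cat2": ["Maybe", "Maybe", "No"],
                                 "cat3": ["Yes", "Yes", "Maybe"]})
        print(is_match(sean, other))   # False: every category is more than 1 apart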

    Read the article

  • how to use a batch file to delete a line of text in a bunch of text files? [on hold]

    - by wbt
    I have a bunch of .txt files on my D: drive, placed in different locations (e.g. D:\abc\1.txt, D:\xyz\2.txt); some of the files also contain symbols. I want a batch file that deletes a specific line (the 3rd line, say) from all of these files at once, instead of my editing them one by one. Please don't point me to the usual input.txt/output.txt approach that writes a changed copy somewhere else: the original files should be replaced in place, with the same name in the same location, as soon as I run the batch file. Ideally it would use some sort of *.txt wildcard, so that a single batch file (kept on another drive, not copied into every folder) can process every file with a .txt extension on the drive. A VBS script would also be welcome. I have been searching the internet for two days for this, and most of what I find reads like jargon to me since I am not much of a scripter, so please explain the code too. Your help is much appreciated.
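
    The asker wants a batch or VBS file, but the in-place logic itself is small; purely as an illustration of that logic, here is a sketch in Python (the D: drive root and the "3rd line" come from the question, everything else is an assumption):

        # Walk a folder tree and remove the 3rd line from every .txt file, writing the
        # result back over the original file (no separate output file). Working on raw
        # bytes sidesteps any encoding questions raised by files that contain symbols.
        # Try it on a copy of the files first.
        from pathlib import Path

        ROOT = Path("D:/")     # folder tree to scan, from the question
        LINE_TO_DELETE = 3     # 1-based line number to remove, from the question

        for txt in ROOT.rglob("*.txt"):
            lines = txt.read_bytes().splitlines(keepends=True)
            if len(lines) >= LINE_TO_DELETE:
                del lines[LINE_TO_DELETE - 1]
                txt.write_bytes(b"".join(lines))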

    Read the article

  • Android onClickListener options and help on creating a non-static array adapter

    - by CoderInTraining
    I am trying to make an application that gets data dynamically and displays it in a listView. Here is my code that I have with static data. There are a couple of things I want to try and do and can't figure out how. MainActivity.java package text.example.project; import java.util.ArrayList; import java.util.Arrays; import java.util.Collections; import android.app.ListActivity; import android.os.Bundle; import android.view.View; import android.widget.AdapterView; import android.widget.AdapterView.OnItemClickListener; import android.widget.ArrayAdapter; import android.widget.ListView; import android.widget.TextView; import android.widget.Toast; public class MainActivity extends ListActivity { //declarations private boolean isItem; private ArrayAdapter<String> item1Adapter; private ArrayAdapter<String> item2Adapter; private ArrayAdapter<String> item3Adapter; /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); Collections.sort(ITEM1); Collections.sort(ITEM2); Collections.sort(ITEM3); item1Adapter = new ArrayAdapter<String>(this, R.layout.list_item, ITEM1); item2Adapter = new ArrayAdapter<String>(this, R.layout.list_item, ITEM2); item3Adapter = new ArrayAdapter<String>(this, R.layout.list_item, ITEM3); setListAdapter(item1Adapter); isItem = true; ListView lv = getListView(); lv.setTextFilterEnabled(true); lv.setOnItemClickListener(new OnItemClickListener() { @Override public void onItemClick(AdapterView<?> parent, View view, int position, long id) { // When clicked, show a toast with the TextView text if (isItem) { //ITEM1.add("another\n\t" + Calendar.getInstance().getTime()); Collections.sort(ITEM1); item2Adapter.notifyDataSetChanged(); setListAdapter(item2Adapter); isItem = false; } else { item1Adapter.notifyDataSetChanged(); setListAdapter(item1Adapter); isItem = true; } Toast.makeText(getApplicationContext(), ((TextView) view).getText(), Toast.LENGTH_SHORT).show(); } }); } // need to turn dynamic static ArrayList<String> ITEM1 = new ArrayList<String> (Arrays.asList( "ITEM1-1", "ITEM1-2" )); static ArrayList<String> ITEM2 = new ArrayList<String> (Arrays.asList( "ITEM2-1", "ITEM2-2" )); static ArrayList<String> ITEM3 = new ArrayList<String> (Arrays.asList("ITEM3-1", "ITEM3-2")); } list_item.xml <?xml version="1.0" encoding="utf-8"?> <TextView xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="fill_parent" android:layout_height="fill_parent" android:padding="10dp" android:textSize="16sp" > </TextView> What I want to do is first pull from a dynamic source. I need to do what is almost exactly like this tutorial... http://androiddevelopement.blogspot.in/2011/06/android-xml-parsing-tutorial-using.html ... however, I can't get the tutorial to work... I also would like to know how I can make the list item clicks so that if I click on "ITEM1-1" it goes to the menu "ITEM2-1" and "ITEM2-2". and if "ITEM1-2" is clicked, then it goes to the menu with "ITEM3-1" and "ITEM3-2"... I am totally a noob at this whole android development thing. So any suggestions would be great!
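
    Setting the Android plumbing aside, the navigation being asked for ("ITEM1-1" opens the ITEM2 list, "ITEM1-2" opens the ITEM3 list) boils down to a lookup table from the clicked label to its child list. A minimal, language-neutral sketch in Python (all names hypothetical); in the activity above, the lookup result would decide which ArrayAdapter to hand to setListAdapter():

        # The requested navigation is a lookup table from a clicked label to the list
        # that should be shown next (hypothetical names throughout).
        MENUS = {
            "ROOT":    ["ITEM1-1", "ITEM1-2"],
            "ITEM1-1": ["ITEM2-1", "ITEM2-2"],
            "ITEM1-2": ["ITEM3-1", "ITEM3-2"],
        }

        current = MENUS["ROOT"]
        clicked = "ITEM1-2"                    # what onItemClick would hand you
        current = MENUS.get(clicked, current)  # swap to the child menu if one exists
        print(current)                         # ['ITEM3-1', 'ITEM3-2']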

    Read the article

  • A way of doing real-world test-driven development (and some thoughts about it)

    - by Thomas Weller
    Lately, I exchanged some arguments with Derick Bailey about some details of the red-green-refactor cycle of the Test-driven development process. In short, the issue revolved around the fact that it’s not enough to have a test red or green, but it’s also important to have it red or green for the right reasons. While for me, it’s sufficient to initially have a NotImplementedException in place, Derick argues that this is not totally correct (see these two posts: Red/Green/Refactor, For The Right Reasons and Red For The Right Reason: Fail By Assertion, Not By Anything Else). And he’s right. But on the other hand, I had no idea how his insights could have any practical consequence for my own individual interpretation of the red-green-refactor cycle (which is not really red-green-refactor, at least not in its pure sense, see the rest of this article). This made me think deeply for some days now. In the end I found out that the ‘right reason’ changes in my understanding depending on what development phase I’m in. To make this clear (at least I hope it becomes clear…) I started to describe my way of working in some detail, and then something strange happened: The scope of the article slightly shifted from focusing ‘only’ on the ‘right reason’ issue to something more general, which you might describe as something like  'Doing real-world TDD in .NET , with massive use of third-party add-ins’. This is because I feel that there is a more general statement about Test-driven development to make:  It’s high time to speak about the ‘How’ of TDD, not always only the ‘Why’. Much has been said about this, and me myself also contributed to that (see here: TDD is not about testing, it's about how we develop software). But always justifying what you do is very unsatisfying in the long run, it is inherently defensive, and it costs time and effort that could be used for better and more important things. And frankly: I’m somewhat sick and tired of repeating time and again that the test-driven way of software development is highly preferable for many reasons - I don’t want to spent my time exclusively on stating the obvious… So, again, let’s say it clearly: TDD is programming, and programming is TDD. Other ways of programming (code-first, sometimes called cowboy-coding) are exceptional and need justification. – I know that there are many people out there who will disagree with this radical statement, and I also know that it’s not a description of the real world but more of a mission statement or something. But nevertheless I’m absolutely sure that in some years this statement will be nothing but a platitude. Side note: Some parts of this post read as if I were paid by Jetbrains (the manufacturer of the ReSharper add-in – R#), but I swear I’m not. Rather I think that Visual Studio is just not production-complete without it, and I wouldn’t even consider to do professional work without having this add-in installed... The three parts of a software component Before I go into some details, I first should describe my understanding of what belongs to a software component (assembly, type, or method) during the production process (i.e. the coding phase). Roughly, I come up with the three parts shown below:   First, we need to have some initial sort of requirement. This can be a multi-page formal document, a vague idea in some programmer’s brain of what might be needed, or anything in between. In either way, there has to be some sort of requirement, be it explicit or not. 
– At the C# micro-level, the best way that I found to formulate that is to define interfaces for just about everything, even for internal classes, and to provide them with exhaustive xml comments. The next step then is to re-formulate these requirements in an executable form. This is specific to the respective programming language. - For C#/.NET, the Gallio framework (which includes MbUnit) in conjunction with the ReSharper add-in for Visual Studio is my toolset of choice. The third part then finally is the production code itself. It’s development is entirely driven by the requirements and their executable formulation. This is the delivery, the two other parts are ‘only’ there to make its production possible, to give it a decent quality and reliability, and to significantly reduce related costs down the maintenance timeline. So while the first two parts are not really relevant for the customer, they are very important for the developer. The customer (or in Scrum terms: the Product Owner) is not interested at all in how  the product is developed, he is only interested in the fact that it is developed as cost-effective as possible, and that it meets his functional and non-functional requirements. The rest is solely a matter of the developer’s craftsmanship, and this is what I want to talk about during the remainder of this article… An example To demonstrate my way of doing real-world TDD, I decided to show the development of a (very) simple Calculator component. The example is deliberately trivial and silly, as examples always are. I am totally aware of the fact that real life is never that simple, but I only want to show some development principles here… The requirement As already said above, I start with writing down some words on the initial requirement, and I normally use interfaces for that, even for internal classes - the typical question “intf or not” doesn’t even come to mind. I need them for my usual workflow and using them automatically produces high componentized and testable code anyway. To think about their usage in every single situation would slow down the production process unnecessarily. So this is what I begin with: namespace Calculator {     /// <summary>     /// Defines a very simple calculator component for demo purposes.     /// </summary>     public interface ICalculator     {         /// <summary>         /// Gets the result of the last successful operation.         /// </summary>         /// <value>The last result.</value>         /// <remarks>         /// Will be <see langword="null" /> before the first successful operation.         /// </remarks>         double? LastResult { get; }       } // interface ICalculator   } // namespace Calculator So, I’m not beginning with a test, but with a sort of code declaration - and still I insist on being 100% test-driven. There are three important things here: Starting this way gives me a method signature, which allows to use IntelliSense and AutoCompletion and thus eliminates the danger of typos - one of the most regular, annoying, time-consuming, and therefore expensive sources of error in the development process. In my understanding, the interface definition as a whole is more of a readable requirement document and technical documentation than anything else. So this is at least as much about documentation than about coding. The documentation must completely describe the behavior of the documented element. I normally use an IoC container or some sort of self-written provider-like model in my architecture. 
In either case, I need my components defined via service interfaces anyway. - I will use the LinFu IoC framework here, for no other reason as that is is very simple to use. The ‘Red’ (pt. 1)   First I create a folder for the project’s third-party libraries and put the LinFu.Core dll there. Then I set up a test project (via a Gallio project template), and add references to the Calculator project and the LinFu dll. Finally I’m ready to write the first test, which will look like the following: namespace Calculator.Test {     [TestFixture]     public class CalculatorTest     {         private readonly ServiceContainer container = new ServiceContainer();           [Test]         public void CalculatorLastResultIsInitiallyNull()         {             ICalculator calculator = container.GetService<ICalculator>();               Assert.IsNull(calculator.LastResult);         }       } // class CalculatorTest   } // namespace Calculator.Test       This is basically the executable formulation of what the interface definition states (part of). Side note: There’s one principle of TDD that is just plain wrong in my eyes: I’m talking about the Red is 'does not compile' thing. How could a compiler error ever be interpreted as a valid test outcome? I never understood that, it just makes no sense to me. (Or, in Derick’s terms: this reason is as wrong as a reason ever could be…) A compiler error tells me: Your code is incorrect, but nothing more.  Instead, the ‘Red’ part of the red-green-refactor cycle has a clearly defined meaning to me: It means that the test works as intended and fails only if its assumptions are not met for some reason. Back to our Calculator. When I execute the above test with R#, the Gallio plugin will give me this output: So this tells me that the test is red for the wrong reason: There’s no implementation that the IoC-container could load, of course. So let’s fix that. With R#, this is very easy: First, create an ICalculator - derived type:        Next, implement the interface members: And finally, move the new class to its own file: So far my ‘work’ was six mouse clicks long, the only thing that’s left to do manually here, is to add the Ioc-specific wiring-declaration and also to make the respective class non-public, which I regularly do to force my components to communicate exclusively via interfaces: This is what my Calculator class looks like as of now: using System; using LinFu.IoC.Configuration;   namespace Calculator {     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult         {             get             {                 throw new NotImplementedException();             }         }     } } Back to the test fixture, we have to put our IoC container to work: [TestFixture] public class CalculatorTest {     #region Fields       private readonly ServiceContainer container = new ServiceContainer();       #endregion // Fields       #region Setup/TearDown       [FixtureSetUp]     public void FixtureSetUp()     {        container.LoadFrom(AppDomain.CurrentDomain.BaseDirectory, "Calculator.dll");     }       ... Because I have a R# live template defined for the setup/teardown method skeleton as well, the only manual coding here again is the IoC-specific stuff: two lines, not more… The ‘Red’ (pt. 2) Now, the execution of the above test gives the following result: This time, the test outcome tells me that the method under test is called. 
And this is the point, where Derick and I seem to have somewhat different views on the subject: Of course, the test still is worthless regarding the red/green outcome (or: it’s still red for the wrong reasons, in that it gives a false negative). But as far as I am concerned, I’m not really interested in the test outcome at this point of the red-green-refactor cycle. Rather, I only want to assert that my test actually calls the right method. If that’s the case, I will happily go on to the ‘Green’ part… The ‘Green’ Making the test green is quite trivial. Just make LastResult an automatic property:     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult { get; private set; }     }         One more round… Now on to something slightly more demanding (cough…). Let’s state that our Calculator exposes an Add() method:         ...   /// <summary>         /// Adds the specified operands.         /// </summary>         /// <param name="operand1">The operand1.</param>         /// <param name="operand2">The operand2.</param>         /// <returns>The result of the additon.</returns>         /// <exception cref="ArgumentException">         /// Argument <paramref name="operand1"/> is &lt; 0.<br/>         /// -- or --<br/>         /// Argument <paramref name="operand2"/> is &lt; 0.         /// </exception>         double Add(double operand1, double operand2);       } // interface ICalculator A remark: I sometimes hear the complaint that xml comment stuff like the above is hard to read. That’s certainly true, but irrelevant to me, because I read xml code comments with the CR_Documentor tool window. And using that, it looks like this:   Apart from that, I’m heavily using xml code comments (see e.g. here for a detailed guide) because there is the possibility of automating help generation with nightly CI builds (using MS Sandcastle and the Sandcastle Help File Builder), and then publishing the results to some intranet location.  This way, a team always has first class, up-to-date technical documentation at hand about the current codebase. (And, also very important for speeding up things and avoiding typos: You have IntelliSense/AutoCompletion and R# support, and the comments are subject to compiler checking…).     Back to our Calculator again: Two more R# – clicks implement the Add() skeleton:         ...           public double Add(double operand1, double operand2)         {             throw new NotImplementedException();         }       } // class Calculator As we have stated in the interface definition (which actually serves as our requirement document!), the operands are not allowed to be negative. So let’s start implementing that. Here’s the test: [Test] [Row(-0.5, 2)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); } As you can see, I’m using a data-driven unit test method here, mainly for these two reasons: Because I know that I will have to do the same test for the second operand in a few seconds, I save myself from implementing another test method for this purpose. Rather, I only will have to add another Row attribute to the existing one. From the test report below, you can see that the argument values are explicitly printed out. 
This can be a valuable documentation feature even when everything is green: One can quickly review what values were tested exactly - the complete Gallio HTML-report (as it will be produced by the Continuous Integration runs) shows these values in a quite clear format (see below for an example). Back to our Calculator development again, this is what the test result tells us at the moment: So we’re red again, because there is not yet an implementation… Next we go on and implement the necessary parameter verification to become green again, and then we do the same thing for the second operand. To make a long story short, here’s the test and the method implementation at the end of the second cycle: // in CalculatorTest:   [Test] [Row(-0.5, 2)] [Row(295, -123)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); }   // in Calculator: public double Add(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }     if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }     throw new NotImplementedException(); } So far, we have sheltered our method from unwanted input, and now we can safely operate on the parameters without further caring about their validity (this is my interpretation of the Fail Fast principle, which is regarded here in more detail). Now we can think about the method’s successful outcomes. First let’s write another test for that: [Test] [Row(1, 1, 2)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } Again, I’m regularly using row based test methods for these kinds of unit tests. The above shown pattern proved to be extremely helpful for my development work, I call it the Defined-Input/Expected-Output test idiom: You define your input arguments together with the expected method result. There are two major benefits from that way of testing: In the course of refining a method, it’s very likely to come up with additional test cases. In our case, we might add tests for some edge cases like ‘one of the operands is zero’ or ‘the sum of the two operands causes an overflow’, or maybe there’s an external test protocol that has to be fulfilled (e.g. an ISO norm for medical software), and this results in the need of testing against additional values. In all these scenarios we only have to add another Row attribute to the test. Remember that the argument values are written to the test report, so as a side-effect this produces valuable documentation. (This can become especially important if the fulfillment of some sort of external requirements has to be proven). 
So your test method might look something like that in the end: [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 2)] [Row(0, 999999999, 999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, double.MaxValue)] [Row(4, double.MaxValue - 2.5, double.MaxValue)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } And this will produce the following HTML report (with Gallio):   Not bad for the amount of work we invested in it, huh? - There might be scenarios where reports like that can be useful for demonstration purposes during a Scrum sprint review… The last requirement to fulfill is that the LastResult property is expected to store the result of the last operation. I don’t show this here, it’s trivial enough and brings nothing new… And finally: Refactor (for the right reasons) To demonstrate my way of going through the refactoring portion of the red-green-refactor cycle, I added another method to our Calculator component, namely Subtract(). Here’s the code (tests and production): // CalculatorTest.cs:   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtract(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, result); }   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtractGivesExpectedLastResult(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, calculator.LastResult); }   ...   // ICalculator.cs: /// <summary> /// Subtracts the specified operands. /// </summary> /// <param name="operand1">The operand1.</param> /// <param name="operand2">The operand2.</param> /// <returns>The result of the subtraction.</returns> /// <exception cref="ArgumentException"> /// Argument <paramref name="operand1"/> is &lt; 0.<br/> /// -- or --<br/> /// Argument <paramref name="operand2"/> is &lt; 0. /// </exception> double Subtract(double operand1, double operand2);   ...   // Calculator.cs:   public double Subtract(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }       if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }       return (this.LastResult = operand1 - operand2).Value; }   Obviously, the argument validation stuff that was produced during the red-green part of our cycle duplicates the code from the previous Add() method. So, to avoid code duplication and minimize the number of code lines of the production code, we do an Extract Method refactoring. 
One more time, this is only a matter of a few mouse clicks (and giving the new method a name) with R#: Having done that, our production code finally looks like that: using System; using LinFu.IoC.Configuration;   namespace Calculator {     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         #region ICalculator           public double? LastResult { get; private set; }           public double Add(double operand1, double operand2)         {             ThrowIfOneOperandIsInvalid(operand1, operand2);               return (this.LastResult = operand1 + operand2).Value;         }           public double Subtract(double operand1, double operand2)         {             ThrowIfOneOperandIsInvalid(operand1, operand2);               return (this.LastResult = operand1 - operand2).Value;         }           #endregion // ICalculator           #region Implementation (Helper)           private static void ThrowIfOneOperandIsInvalid(double operand1, double operand2)         {             if (operand1 < 0.0)             {                 throw new ArgumentException("Value must not be negative.", "operand1");             }               if (operand2 < 0.0)             {                 throw new ArgumentException("Value must not be negative.", "operand2");             }         }           #endregion // Implementation (Helper)       } // class Calculator   } // namespace Calculator But is the above worth the effort at all? It’s obviously trivial and not very impressive. All our tests were green (for the right reasons), and refactoring the code did not change anything. It’s not immediately clear how this refactoring work adds value to the project. Derick puts it like this: STOP! Hold on a second… before you go any further and before you even think about refactoring what you just wrote to make your test pass, you need to understand something: if your done with your requirements after making the test green, you are not required to refactor the code. I know… I’m speaking heresy, here. Toss me to the wolves, I’ve gone over to the dark side! Seriously, though… if your test is passing for the right reasons, and you do not need to write any test or any more code for you class at this point, what value does refactoring add? Derick immediately answers his own question: So why should you follow the refactor portion of red/green/refactor? When you have added code that makes the system less readable, less understandable, less expressive of the domain or concern’s intentions, less architecturally sound, less DRY, etc, then you should refactor it. I couldn’t state it more precise. From my personal perspective, I’d add the following: You have to keep in mind that real-world software systems are usually quite large and there are dozens or even hundreds of occasions where micro-refactorings like the above can be applied. It’s the sum of them all that counts. And to have a good overall quality of the system (e.g. in terms of the Code Duplication Percentage metric) you have to be pedantic on the individual, seemingly trivial cases. My job regularly requires the reading and understanding of ‘foreign’ code. 
So code quality/readability really makes a HUGE difference for me – sometimes it can be even the difference between project success and failure… Conclusions The above described development process emerged over the years, and there were mainly two things that guided its evolution (you might call it eternal principles, personal beliefs, or anything in between): Test-driven development is the normal, natural way of writing software, code-first is exceptional. So ‘doing TDD or not’ is not a question. And good, stable code can only reliably be produced by doing TDD (yes, I know: many will strongly disagree here again, but I’ve never seen high-quality code – and high-quality code is code that stood the test of time and causes low maintenance costs – that was produced code-first…) It’s the production code that pays our bills in the end. (Though I have seen customers these days who demand an acceptance test battery as part of the final delivery. Things seem to go into the right direction…). The test code serves ‘only’ to make the production code work. But it’s the number of delivered features which solely counts at the end of the day - no matter how much test code you wrote or how good it is. With these two things in mind, I tried to optimize my coding process for coding speed – or, in business terms: productivity - without sacrificing the principles of TDD (more than I’d do either way…).  As a result, I consider a ratio of about 3-5/1 for test code vs. production code as normal and desirable. In other words: roughly 60-80% of my code is test code (This might sound heavy, but that is mainly due to the fact that software development standards only begin to evolve. The entire software development profession is very young, historically seen; only at the very beginning, and there are no viable standards yet. If you think about software development as a kind of casting process, where the test code is the mold and the resulting production code is the final product, then the above ratio sounds no longer extraordinary…) Although the above might look like very much unnecessary work at first sight, it’s not. With the aid of the mentioned add-ins, doing all the above is a matter of minutes, sometimes seconds (while writing this post took hours and days…). The most important thing is to have the right tools at hand. Slow developer machines or the lack of a tool or something like that - for ‘saving’ a few 100 bucks -  is just not acceptable and a very bad decision in business terms (though I quite some times have seen and heard that…). Production of high-quality products needs the usage of high-quality tools. This is a platitude that every craftsman knows… The here described round-trip will take me about five to ten minutes in my real-world development practice. I guess it’s about 30% more time compared to developing the ‘traditional’ (code-first) way. But the so manufactured ‘product’ is of much higher quality and massively reduces maintenance costs, which is by far the single biggest cost factor, as I showed in this previous post: It's the maintenance, stupid! (or: Something is rotten in developerland.). In the end, this is a highly cost-effective way of software development… But on the other hand, there clearly is a trade-off here: coding speed vs. code quality/later maintenance costs. The here described development method might be a perfect fit for the overwhelming majority of software projects, but there certainly are some scenarios where it’s not - e.g. 
if time-to-market is crucial for a software project. So this is a business decision in the end. It’s just that you have to know what you’re doing and what consequences this might have… Some last words First, I’d like to thank Derick Bailey again. His two aforementioned posts (which I strongly recommend for reading) inspired me to think deeply about my own personal way of doing TDD and to clarify my thoughts about it. I wouldn’t have done that without this inspiration. I really enjoy that kind of discussions… I agree with him in all respects. But I don’t know (yet?) how to bring his insights into the described production process without slowing things down. The above described method proved to be very “good enough” in my practical experience. But of course, I’m open to suggestions here… My rationale for now is: If the test is initially red during the red-green-refactor cycle, the ‘right reason’ is: it actually calls the right method, but this method is not yet operational. Later on, when the cycle is finished and the tests become part of the regular, automated Continuous Integration process, ‘red’ certainly must occur for the ‘right reason’: in this phase, ‘red’ MUST mean nothing but an unfulfilled assertion - Fail By Assertion, Not By Anything Else!
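
    For readers outside the .NET/Gallio world, the Defined-Input/Expected-Output idiom described above maps directly onto other test frameworks. A rough sketch of the same Add() tests using Python and pytest (assuming pytest is available; the Calculator class here is a stand-in for the article's C# component, not a port of it):

        # The Defined-Input/Expected-Output idiom from the article, sketched with
        # pytest.mark.parametrize. The Calculator below is a stand-in for the C#
        # component, including its fail-fast argument check.
        import pytest

        class Calculator:
            def __init__(self):
                self.last_result = None

            def add(self, operand1, operand2):
                if operand1 < 0 or operand2 < 0:
                    raise ValueError("operands must not be negative")
                self.last_result = operand1 + operand2
                return self.last_result

        @pytest.mark.parametrize("operand1, operand2, expected", [
            (1, 1, 2),
            (0, 999999999, 999999999),
            (0, 0, 0),
        ])
        def test_add(operand1, operand2, expected):
            assert Calculator().add(operand1, operand2) == expected

        @pytest.mark.parametrize("operand1, operand2", [(-0.5, 2), (295, -123)])
        def test_add_rejects_negative_operands(operand1, operand2):
            with pytest.raises(ValueError):
                Calculator().add(operand1, operand2)

    As in the Gallio version, adding a new case is a matter of adding one more row of input values, and the report names the exact arguments that were exercised.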

    Read the article

  • Logical Domain Modeling Made Simple

    - by Knut Vatsendvik
    How can logical domain modeling be made simple and collaborative? Many non-technical end-users, managers and business domain experts find it difficult to understand the visual models offered by many UML tools. This creates trouble in capturing and verifying the information that goes into a logical domain model. The tools are also too advanced and complex for a non-technical user to learn and use. We have therefore, in our current project, ended up with using Confluence as tool for designing the logical domain model with the help of a few very useful plugins. Big thanks to Ole Nymoen and Per Spilling for their expertise in this field that made this posting possible. Confluence Plugins Here is a list of Confluence plugins used in this solution. Install these before trying out the macros used below. Plugin Description Copy Space Allows a space administrator to copy a space, including the pages within the space Metadata Supports adding metadata to Wiki pages Label Manages labeling of pages Linking Contains macros for linking to templates, the dashboard and other Table Enhances the table capability in Confluence Creating a Confluence Space First we need to create a new confluence space for the domain model. Click the link Create a Space located below the list of spaces on the Dashboard. Please contact your Confluence administrator is you do not have permissions to do this.   For illustrative purpose all attributes and entities in this posting are based on my imaginary project manager domain model. When a logical domain model is good enough for being implemented, do a copy of the Confluence Space (see Copy Space plugin). In this way you create a stable version of the logical domain model while further design can continue with the new copied space. Typical will the implementation phase result in a database design and/or a XSD schema design. Add Space Templates Go to the Home page of your Confluence Space. Navigate to the Browse drop-down menu and click on Advanced. Then click the Templates option in the left navigation panel. Click Add New Space Template to add the following three templates. Name: attribute {metadata-list} || Name | | || Type | | || Format | | || Description | | {metadata-list} {add-label:attribute} Name: primary-type {metadata-list} || Name | || || Type | || || Format | || || Description | || {metadata-list} {add-label:primary-type} Name: complex-type {metadata-list} || Name | || || Description |  || {metadata-list} h3. Attributes || Name || Type || Format || Description || | [name] | {metadata-from:name|Type} | {metadata-from:name|Format} | {metadata-from:name|Description} | {add-label:complex-type,entity} The metadata-list macro (see Metadata plugin) will save a list of metadata values to the page. The add-label macro (see Label plugin) will automatically label the page. Primary Types Page Our first page to add will act as container for our primary types. Switch to Wiki markup when adding the following content to the page. | (+) {add-page:template=primary-type|parent=@self}Add new primary type{add-page} | {metadata-report:Name,Type,Format,Description|sort=Name|root=@self|pages=@descendents} Once the page is created, click the Add new primary type (create-page macro) to start creating a new pages. Here is an example of input to the LocalDate page. Embrace the LocalDate with square brackets [] to make the page linkable. Again switch to Wiki markup before editing. 
{metadata-list} || Name | [LocalDate] || || Type | Date || || Format | YYYY-MM-DD || || Description | Date in local time zone. YYYY = year, MM = month and DD = day || {metadata-list} {add-label:primary-type} The metadata-report macro will show a tabular report of all child pages.   Attributes Page The next page will act as container for all of our attributes. | (+) {add-page:template=attribute|parent=@self|title=attribute}Add new attribute{add-page} | {metadata-report:Name,Type,Format,Description|sort=Name|pages=@descendants} Here is an example of input to the startDate page. {metadata-list} || Name | [startDate] || || Type | [LocalDate] || || Format | {metadata-from:LocalDate|Format} || || Description | The projects start date || {metadata-list} {add-label:attribute} Using the metadata-from macro we fetch the text from the previously created LocalDate page. Complex Types Page The last page in this example shows how attributes can be combined together to form more complex types.   h3. Intro Overview of complex types in the domain model. | (+) {add-page:template=complex-type|parent=@self}Add a new complex type{add-page}\\ | {metadata-report:Name,Description|sort=Name|root=@self|pages=@descendents} Here is an example of input to the ProjectType page. {metadata-list} || Name | [ProjectType] || || Description | Represents a project || {metadata-list} h3. Attributes || Name || Type || Format || Description || | [projectId] | {metadata-from:projectId|Type} | {metadata-from:projectId|Format} | {metadata-from:projectId|Description} | | [name] | {metadata-from:name|Type} | {metadata-from:name|Format} | {metadata-from:name|Description} | | [description] | {metadata-from:description|Type} | {metadata-from:description|Format} | {metadata-from:description|Description} | | [startDate] | {metadata-from:startDate|Type} | {metadata-from:startDate|Format} | {metadata-from:startDate|Description} | {add-label:complex-type,entity} Gives us this Conclusion Using a web-based corporate Wiki like Confluence to create a logical domain model increases the collaboration between people with different roles in the enterprise. It’s my believe that this helps the domain model to be more accurate, and better documented. In our real project we have more pages than illustrated here to complete the documentation. We do also still use UML tools to create different types of diagrams that Confluence do not support. As a last tip, an ImageMap plugin can make those diagrams clickable when used in pages. Enjoy!

    Read the article

  • SQL Table stored as a Heap - the dangers within

    - by MikeD
    Nearly all of the time I create a table, I include a primary key, and often that PK is implemented as a clustered index. Those two don't always have to go together, but in my world they almost always do. On a recent project, I was working on a data warehouse and a set of SSIS packages to import data from an OLTP database into my data warehouse. The data I was importing from the business database into the warehouse was mostly new rows, sometimes updates to existing rows, and sometimes deletes. I decided to use the MERGE statement to implement the insert, update or delete in the data warehouse, I found it quite performant to have a stored procedure that extracted all the new, updated, and deleted rows from the source database and dump it into a working table in my data warehouse, then run a stored proc in the warehouse that was the MERGE statement that took the rows from the working table and updated the real fact table. Use Warehouse CREATE TABLE Integration.MergePolicy (PolicyId int, PolicyTypeKey int, Premium money, Deductible money, EffectiveDate date, Operation varchar(5)) CREATE TABLE fact.Policy (PolicyKey int identity primary key, PolicyId int, PolicyTypeKey int, Premium money, Deductible money, EffectiveDate date) CREATE PROC Integration.MergePolicy as begin begin tran Merge fact.Policy as tgtUsing Integration.MergePolicy as SrcOn (tgt.PolicyId = Src.PolicyId) When not matched by Target then Insert (PolicyId, PolicyTypeKey, Premium, Deductible, EffectiveDate)values (src.PolicyId, src.PolicyTypeKey, src.Premium, src.Deductible, src.EffectiveDate) When matched and src.Operation = 'U' then Update set PolicyTypeKey = src.PolicyTypeKey,Premium = src.Premium,Deductible = src.Deductible,EffectiveDate = src.EffectiveDate When matched and src.Operation = 'D' then Delete ;delete from Integration.WorkPolicy commit end Notice that my worktable (Integration.MergePolicy) doesn't have any primary key or clustered index. I didn't think this would be a problem, since it was relatively small table and was empty after each time I ran the stored proc. For one of the work tables, during the initial loads of the warehouse, it was getting about 1.5 million rows inserted, processed, then deleted. Also, because of a bug in the extraction process, the same 1.5 million rows (plus a few hundred more each time) was getting inserted, processed, and deleted. This was being sone on a fairly hefty server that was otherwise unused, and no one was paying any attention to the time it was taking. This week I received a backup of this database and loaded it on my laptop to troubleshoot the problem, and of course it took a good ten minutes or more to run the process. However, what seemed strange to me was that after I fixed the problem and happened to run the merge sproc when the work table was completely empty, it still took almost ten minutes to complete. I immediately looked back at the MERGE statement to see if I had some sort of outer join that meant it would be scanning the target table (which had about 2 million rows in it), then turned on the execution plan output to see what was happening under the hood. Running the stored procedure again took a long time, and the plan output didn't show me much - 55% on the MERGE statement, and 45% on the DELETE statement, and table scans on the work table in both places. I was surprised at the relative cost of the DELETE statement, because there were really 0 rows to delete, but I was expecting to see the table scans. 
(I was beginning now to suspect that my problem was because the work table was being stored as a heap.) Then I turned on STATS_IO and ran the sproc again. The output was quite interesting.Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.Table 'Policy'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.Table 'MergePolicy'. Scan count 1, logical reads 433276, physical reads 60, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. I've reproduced the above from memory, the details aren't exact, but the essential bit was the very high number of logical reads on the table stored as a heap. Even just doing a SELECT Count(*) from Integration.MergePolicy incurred that sort of output, even though the result was always 0. I suppose I should research more on the allocation and deallocation of pages to tables stored as a heap, but I haven't, and my original assumption that a table stored as a heap with no rows would only need to read one page to answer any query was definitely proven wrong. It's likely that some sort of physical defragmentation of the table may have cleaned that up, but it seemed that the easiest answer was to put a clustered index on the table. After doing so, the execution plan showed a cluster index scan, and the IO stats showed only a single page read. (I aborted my first attempt at adding a clustered index on the table because it was taking too long - instead I ran TRUNCATE TABLE Integration.MergePolicy first and added the clustered index, both of which took very little time). I suspect I may not have noticed this if I had used TRUNCATE TABLE Integration.MergePolicy instead of DELETE FROM Integration.MergePolicy, since I'm guessing that the truncate operation does some rather quick releasing of pages allocated to the heap table. In the future, I will likely be much more careful to have a clustered index on every table I use, even the working tables. Mike  

    Read the article

  • Breaking 1NF to model subset constraints. Does this sound sane?

    - by Chris Travers
    My first question here. Appologize if it is in the wrong forum but this seems pretty conceptual. I am looking at doing something that goes against conventional wisdom and want to get some feedback as to whether this is totally insane or will result in problems, so critique away! I am on PostgreSQL 9.1 but may be moving to 9.2 for this part of this project. To re-iterate: Does it seem sane to break 1NF in this way? I am not looking for debugging code so much as where people see problems that this might lead. The Problem In double entry accounting, financial transactions are journal entries with an arbitrary number of lines. Each line has either a left value (debit) or a right value (credit) which can be modelled as a single value with negatives as debits and positives as credits or vice versa. The sum of all debits and credits must equal zero (so if we go with a single amount field, sum(amount) must equal zero for each financial journal entry). SQL-based databases, pretty much required for this sort of work, have no way to express this sort of constraint natively and so any approach to enforcing it in the database seems rather complex. The Write Model The journal entries are append only. There is a possibility we will add a delete model but it will be subject to a different set of restrictions and so is not applicable here. If and when we allow deletes, we will probably do them using a simple ON DELETE CASCADE designation on the foreign key, and require that deletes go through a dedicated stored procedure which can enforce the other constraints. So inserts and selects have to be accommodated but updates and deletes do not for this task. My Proposed Solution My proposed solution is to break first normal form and model constraints on arrays of tuples, with a trigger that breaks the rows out into another table. 
CREATE TABLE journal_line ( entry_id bigserial primary key, account_id int not null references account(id), journal_entry_id bigint not null, -- adding references later amount numeric not null ); I would then add "table methods" to extract debits and credits for reporting purposes: CREATE OR REPLACE FUNCTION debits(journal_line) RETURNS numeric LANGUAGE sql IMMUTABLE AS $$ SELECT CASE WHEN $1.amount < 0 THEN $1.amount * -1 ELSE NULL END; $$; CREATE OR REPLACE FUNCTION credits(journal_line) RETURNS numeric LANGUAGE sql IMMUTABLE AS $$ SELECT CASE WHEN $1.amount > 0 THEN $1.amount ELSE NULL END; $$; Then the journal entry table (simplified for this example): CREATE TABLE journal_entry ( entry_id bigserial primary key, -- no natural keys :-( journal_id int not null references journal(id), date_posted date not null, reference text not null, description text not null, journal_lines journal_line[] not null ); Then a table method and and check constraints: CREATE OR REPLACE FUNCTION running_total(journal_entry) returns numeric language sql immutable as $$ SELECT sum(amount) FROM unnest($1.journal_lines); $$; ALTER TABLE journal_entry ADD CONSTRAINT CHECK (((journal_entry.running_total) = 0)); ALTER TABLE journal_line ADD FOREIGN KEY journal_entry_id REFERENCES journal_entry(entry_id); And finally we'd have a breakout trigger: CREATE OR REPLACE FUNCTION je_breakout() RETURNS TRIGGER LANGUAGE PLPGSQL AS $$ BEGIN IF TG_OP = 'INSERT' THEN INSERT INTO journal_line (journal_entry_id, account_id, amount) SELECT NEW.id, account_id, amount FROM unnest(NEW.journal_lines); RETURN NEW; ELSE RAISE EXCEPTION 'Operation Not Allowed'; END IF; END; $$; And finally CREATE TRIGGER AFTER INSERT OR UPDATE OR DELETE ON journal_entry FOR EACH ROW EXECUTE_PROCEDURE je_breaout(); Of course the example above is simplified. There will be a status table that will track approval status allowing for separation of duties, etc. However the goal here is to prevent unbalanced transactions. Any feedback? Does this sound entirely insane? Standard Solutions? In getting to this point I have to say I have looked at four different current ERP solutions to this problems: Represent every line item as a debit and a credit against different accounts. Use of foreign keys against the line item table to enforce an eventual running total of 0 Use of constraint triggers in PostgreSQL Forcing all validation here solely through the app logic. My concerns are that #1 is pretty limiting and very hard to audit internally. It's not programmer transparent and so it strikes me as being difficult to work with in the future. The second strikes me as being very complex and required a series of contraints and foreign keys against self to make work, and therefore it strikes me as complex, hard to sort out at least in my mind, and thus hard to work with. The fourth could be done as we force all access through stored procedures anyway and this is the most common solution (have the app total things up and throw an error otherwise). However, I think proof that a constraint is followed is superior to test cases, and so the question becomes whether this in fact generates insert anomilies rather than solving them. If this is a solved problem it isn't the case that everyone agrees on the solution....
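
    Independent of where the constraint lives, the invariant being protected is simply that the amounts on a journal entry's lines sum to zero. A small application-side sketch of that rule in Python (hypothetical names; it complements rather than replaces the database-side CHECK constraint and trigger discussed above):

        # The invariant behind all four options: the amounts on a journal entry's
        # lines must sum to zero, with debits stored as negative amounts.
        from decimal import Decimal

        def assert_balanced(journal_lines):
            # journal_lines: iterable of (account_id, amount) pairs, amounts as Decimal
            total = sum(amount for _, amount in journal_lines)
            if total != 0:
                raise ValueError(f"unbalanced journal entry, off by {total}")

        assert_balanced([(1010, Decimal("-100.00")), (4000, Decimal("100.00"))])   # passes
        # assert_balanced([(1010, Decimal("-100.00")), (4000, Decimal("99.99"))])  # raises ValueError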

    Read the article

  • k-d tree implementation [closed]

    - by user466441
    when i run my code and debugged,i got this error - this 0x00093584 {_Myproxy=0x00000000 _Mynextiter=0x00000000 } std::_Iterator_base12 * const - _Myproxy 0x00000000 {_Mycont=??? _Myfirstiter=??? } std::_Container_proxy * _Mycont CXX0017: Error: symbol "" not found _Myfirstiter CXX0030: Error: expression cannot be evaluated + _Mynextiter 0x00000000 {_Myproxy=??? _Mynextiter=??? } std::_Iterator_base12 * but i dont know what does it means,code is this #include<iostream> #include<vector> #include<algorithm> using namespace std; struct point { float x,y; }; vector<point>pointleft(4); vector<point>pointright(4); //we are going to implement two comparison function for x and y coordinates,we need it in calculation of median (we should sort vector //by x or y according to depth informaton,is depth even or odd. bool sortby_X(point &a,point &b) { return a.x<b.x; } bool sortby_Y(point &a,point &b) { return a.y<b.y; } //so i am going to implement to median finding algorithm,one for finding median by x and another find median by y point medianx(vector<point>points) { point temp; sort(points.begin(),points.end(),sortby_X); temp=points[(points.size()/2)]; return temp; } point mediany(vector<point>points) { point temp; sort(points.begin(),points.end(),sortby_Y); temp=points[(points.size()/2)]; return temp; } //now construct basic tree structure struct Tree { float x,y; Tree(point a) { x=a.x; y=a.y; } Tree *left; Tree *right; }; Tree * build_kd( Tree *root,vector<point>points,int depth) { point temp; if(points.size()==1)// that point is as a leaf { if(root==NULL) root=new Tree(points[0]); return root; } if(depth%2==0) { temp=medianx(points); root=new Tree(temp); for(int i=0;i<points.size();i++) { if (points[i].x<temp.x) pointleft[i]=points[i]; else pointright[i]=points[i]; } } else { temp=mediany(points); root=new Tree(temp); for(int i=0;i<points.size();i++) { if(points[i].y<temp.y) pointleft[i]=points[i]; else pointright[i]=points[i]; } } return build_kd(root->left,pointleft,depth+1); return build_kd(root->right,pointright,depth+1); } void print(Tree *root) { while(root!=NULL) { cout<<root->x<<" " <<root->y; print(root->left); print(root->right); } } int main() { int depth=0; Tree *root=NULL; vector<point>points(4); float x,y; int n=4; for(int i=0;i<n;i++) { cin>>x>>y; points[i].x=x; points[i].y=y; } root=build_kd(root,points,depth); print(root); return 0; } i am trying ti implement in c++ this pseudo code tuple function build_kd_tree(int depth, set points): if points contains only one point: return that point as a leaf. if depth is even: Calculate the median x-value. Create a set of points (pointsLeft) that have x-values less than the median. Create a set of points (pointsRight) that have x-values greater than or equal to the median. else: Calculate the median y-value. Create a set of points (pointsLeft) that have y-values less than the median. Create a set of points (pointsRight) that have y-values greater than or equal to the median. treeLeft = build_kd_tree(depth + 1, pointsLeft) treeRight = build_kd_tree(depth + 1, pointsRight) return(median, treeLeft, treeRight) please help me what this error means?
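
    The posted debugger output is just MSVC's debug-iterator internals (an _Iterator_base12 with a null _Myproxy) and is hard to act on by itself; several problems are visible directly in the code: the fixed-size global vectors pointleft and pointright are shared and overwritten by every recursion level, the second return build_kd(root->right, ...) is unreachable and root->left/root->right are never initialized or assigned (so print() follows garbage pointers), and the while (root != NULL) loop in print() never advances. As a reference point, here is a direct Python rendering of the pseudocode at the end of the question; splitting at the median index is the usual simplification of its "less than / greater than or equal" wording:

        # Split on the median x at even depth and the median y at odd depth, recurse
        # on both halves, and keep the left/right point sets local to each call.
        def build_kd(points, depth=0):
            if not points:
                return None
            if len(points) == 1:
                return (points[0], None, None)       # a single point becomes a leaf
            axis = depth % 2                         # even depth -> x, odd depth -> y
            pts = sorted(points, key=lambda p: p[axis])
            mid = len(pts) // 2                      # median on the current axis
            return (pts[mid],
                    build_kd(pts[:mid], depth + 1),      # points below the median
                    build_kd(pts[mid + 1:], depth + 1))  # points at or above it

        def print_tree(node, indent=0):
            if node is not None:
                point, left, right = node
                print(" " * indent, point)
                print_tree(left, indent + 2)
                print_tree(right, indent + 2)

        print_tree(build_kd([(2.0, 3.0), (5.0, 4.0), (9.0, 6.0), (4.0, 7.0)]))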

    Read the article

  • Splitting code in Java - don't know what's wrong [closed]

    - by ???? ?????
    I'm writing code to split a file into many files of a size specified in the code, and then join these parts back together later. The problem is with the joining code: it doesn't work and I can't figure out what is wrong. This is my code:

    import java.io.*;
    import java.util.*;

    public class StupidSplit {
        static final int Chunk_Size = 10;
        static int size = 0;

        public static void main(String[] args) throws IOException {
            String file = "b.txt";
            int chunks = DivideFile(file);
            System.out.print((new File(file)).delete());
            System.out.print(JoinFile(file, chunks));
        }

        static boolean JoinFile(String fname, int nChunks) {
            /*
             * Joins the chunks together. Chunks have been divided using the DivideFile
             * function so the last part of the filename will be ".partxxxx". Checks if all
             * parts are together by matching the number of chunks found against
             * "nChunks", then joins the file, otherwise throws an error.
             */
            boolean successful = false;
            File currentDirectory = new File(System.getProperty("user.dir"));
            File[] fileList = currentDirectory.listFiles();
            /* populate only the files having an extension like "partxxxx" */
            List<File> lst = new ArrayList<File>();
            // Arrays.sort(fileList);
            for (File file : fileList) {
                if (file.isFile()) {
                    String fnm = file.getName();
                    int lastDot = fnm.lastIndexOf('.');
                    // add to the list the files which match the name given by "fname"
                    // and have "partxxxx" as extension
                    if (fnm.substring(0, lastDot).equalsIgnoreCase(fname)
                            && (fnm.substring(lastDot + 1)).substring(0, 4).equals("part")) {
                        lst.add(file);
                    }
                }
            }
            /*
             * sort the list - it will be sorted by extension only because we have
             * ensured that the list only contains those files that have "fname" and
             * "part"
             */
            File[] files = (File[]) lst.toArray(new File[0]);
            Arrays.sort(files);
            System.out.println("size =" + files.length);
            System.out.println("hello");
            /* Ensure that the number of chunks matches the length of the array */
            if (files.length == nChunks - 1) {
                File ofile = new File(fname);
                FileOutputStream fos;
                FileInputStream fis;
                byte[] fileBytes;
                int bytesRead = 0;
                try {
                    fos = new FileOutputStream(ofile, true);
                    for (File file : files) {
                        fis = new FileInputStream(file);
                        fileBytes = new byte[(int) file.length()];
                        bytesRead = fis.read(fileBytes, 0, (int) file.length());
                        assert (bytesRead == fileBytes.length);
                        assert (bytesRead == (int) file.length());
                        fos.write(fileBytes);
                        fos.flush();
                        fileBytes = null;
                        fis.close();
                        fis = null;
                    }
                    fos.close();
                    fos = null;
                } catch (FileNotFoundException fnfe) {
                    System.out.println("Could not find file");
                    successful = false;
                    return successful;
                } catch (IOException ioe) {
                    System.out.println("Cannot write to disk");
                    successful = false;
                    return successful;
                }
                /* ensure the size of the file matches the size given by the server */
                successful = (ofile.length() == StupidSplit.size) ? true : false;
            } else {
                successful = false;
            }
            return successful;
        }

        static int DivideFile(String fname) {
            File ifile = new File(fname);
            FileInputStream fis;
            String newName;
            FileOutputStream chunk;
            //int fileSize = (int) ifile.length();
            double fileSize = (double) ifile.length();
            //int nChunks = 0, read = 0, readLength = Chunk_Size;
            int nChunks = 0, read = 0, readLength = Chunk_Size;
            byte[] byteChunk;
            try {
                fis = new FileInputStream(ifile);
                StupidSplit.size = (int) ifile.length();
                while (fileSize > 0) {
                    if (fileSize <= Chunk_Size) {
                        readLength = (int) fileSize;
                    }
                    byteChunk = new byte[readLength];
                    read = fis.read(byteChunk, 0, readLength);
                    fileSize -= read;
                    assert (read == byteChunk.length);
                    nChunks++;
                    //newName = fname + ".part" + Integer.toString(nChunks - 1);
                    newName = String.format("%s.part%09d", fname, nChunks - 1);
                    chunk = new FileOutputStream(new File(newName));
                    chunk.write(byteChunk);
                    chunk.flush();
                    chunk.close();
                    byteChunk = null;
                    chunk = null;
                }
                fis.close();
                System.out.println(nChunks);
                // fis = null;
            } catch (FileNotFoundException fnfe) {
                System.out.println("Could not find the given file");
                System.exit(-1);
            } catch (IOException ioe) {
                System.out.println("Error while creating file chunks. Exiting program");
                System.exit(-1);
            }
            System.out.println(nChunks);
            return nChunks;
        }
    }
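    For reference, a minimal sketch of the joining step, assuming the zero-padded name.part000000000 naming that DivideFile produces (the class name and the use of java.nio here are my own choices, not taken from the post):

    import java.io.IOException;
    import java.io.OutputStream;
    import java.nio.file.DirectoryStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    // Joins name.part000000000, name.part000000001, ... back into a single file.
    public class JoinSketch {
        public static void join(String baseName) throws IOException {
            Path dir = Paths.get(".");
            List<Path> parts = new ArrayList<Path>();
            try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir, baseName + ".part*")) {
                for (Path p : stream) {
                    parts.add(p);
                }
            }
            Collections.sort(parts);                   // zero-padded suffixes sort in the right order
            try (OutputStream out = Files.newOutputStream(Paths.get(baseName))) {
                for (Path part : parts) {
                    Files.copy(part, out);             // append each chunk in turn
                }
            }
        }

        public static void main(String[] args) throws IOException {
            join("b.txt");
        }
    }

    When comparing against the original code, it is worth checking whether the files.length == nChunks - 1 test really matches the number of part files that DivideFile writes, since an off-by-one there makes the join silently return false.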

    Read the article

  • Suggest a solution to track changes in a test DB and replicate them in another DB

    - by Pranav
    I need a solution to track changes made in a test DB and replicate them in another DB. My client has two databases: a test DB, where he tries out his data changes on a test portal, and a main DB that drives the live site. When the changes look right on the test portal, he wants to apply the same changes to the main DB. For this he needs a script or some other solution that records or tracks every insertion, update and deletion, so that he can replay them on the main DB once approved. Note: we have only one server, not a separate server, so binary log replication does not seem to work for my case.

    Read the article

  • Mac OS X CD ripping speed

    - by SlimSCSI
    I am using vobcopy (installed via MacPorts) to rip DVDs on a Mac. I have been doing this for a while on Linux with no problems. On the Mac, however, it is VERY slow. I am guessing that somehow the DVD drive is being limited to 1x in order to keep noise and power consumption down during playback. Is there a way to override this? Update: it is MUCH slower than 1x; it has taken me about an hour to copy 300 MB. Notes: while I appreciate all suggestions, I am not looking for "Have you tried HandBrake?". I am looking for a solution to copy the contents of a DVD, not transcode them. Also, I am launching vobcopy from an AppleScript that gets executed on DVD insertion, so a GUI solution is not desirable.

    Read the article

  • How can I do an "insert" statement from MS SQL -> MYSQL using linked servers

    - by bvandrunen
    Is it possible to do an "insert" statement through linked servers? I know it is possible by using MSDTC... but does this work between MS SQL and MySQL? Any help would be greatly appreciated. As of right now, I can update and select between the two databases, but it gives me an error when I try to run an insert statement:

    OLE DB provider "MSDASQL" for linked server "******" returned message "Query-based insertion or updating of BLOB values is not supported.".
    Msg 7343, Level 16, State 2, Line 1
    The OLE DB provider "MSDASQL" for linked server "******" could not INSERT INTO table "[*********]...[******_options]".
    Location: memilb.cpp:1493
    Expression: (*ppilb)->m_cRef == 0
    SPID: 76
    Process ID: 1644

    Read the article

  • Easily tell if focused on VirtualBox window?

    - by Benjamin Oakes
    Is there a way to make it very obvious that my VirtualBox window isn't in focus? The problem I'm having is switching between workspaces on Ubuntu or OS X and then trying to type in my Windows virtual machine, only to find that it's not in focus. The Windows window looks like it's in focus (based on the title bar), but I'm actually typing in Firefox on the host machine, for example. It's even worse because the Windows text insertion cursor keeps blinking as if it had focus. Ideally, I'd like the VM's display to become desaturated (e.g. a "partial" grayscale) when it's not in focus, just to prevent this keyboard-focus problem. Other options would be fine too, as long as I don't have to second-guess where my focus is.

    Read the article

  • Monitor a log file on Linux and send each line to another program

    - by mlambie
    I run an apt-cacher-ng server on Ubuntu Linux which writes logs in the following format:

    1299745593|O|149406|XXX.XXX.XXX.XXX|uburep/pool/main/t/tiff/libtiff4_3.9.2-2ubuntu0.4_amd64.deb
    1299745593|O|10154976|XXX.XXX.XXX.XXX|uburep/pool/main/l/linux-firmware/linux-firmware_1.34.4_all.deb
    1299748529|O|39368|XXX.XXX.XXX.XXX|uburep/pool/main/n/nagios-nrpe/nagios-nrpe-server_2.12-4ubuntu1_amd64.deb
    1300155440|O|680100|XXX.XXX.XXX.XXX|uburep/pool/main/t/tzdata/tzdata_2011c-0ubuntu0.10.04_all.deb

    It shows the timestamp, direction (in or out), byte count, IP and filename. Every time a line is written to it, I'd like to also send that line to another program. I will have this program insert the line into a database so that I can crunch some statistics about how much bandwidth we're saving through operating a caching server. I do not want to cat the log file every X minutes (via cron) looking for new entries as it'd be somewhat computationally uneconomical. Instead I'd prefer to have a daemon monitor the log, and when a change is detected, each line is sent to my database-insertion script. Will swatch achieve this, or are there better options?
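    swatch is designed for exactly this kind of per-line log watching, and a plain tail -F pipeline into the insertion script is the classic shell answer. Purely as an illustration of the follow-the-file idea, here is a hedged Java sketch (the log path and the handler are placeholders of mine; it simply polls once a second rather than using inotify, and a production version would also cope with log rotation and partially written lines):

    import java.io.IOException;
    import java.io.RandomAccessFile;

    // Follows a log file and hands each newly appended line to a handler.
    public class LogFollower {
        public static void main(String[] args) throws IOException, InterruptedException {
            String logFile = "/var/log/apt-cacher-ng/apt-cacher.log";   // placeholder path
            RandomAccessFile file = new RandomAccessFile(logFile, "r");
            file.seek(file.length());                                   // start at the end, like tail -f
            while (true) {
                String line = file.readLine();
                if (line == null) {
                    Thread.sleep(1000);                                 // nothing new yet, wait and retry
                } else {
                    handle(line);
                }
            }
        }

        static void handle(String line) {
            // Placeholder: insert the line into a database here (e.g. via JDBC)
            // or hand it to an external script.
            System.out.println(line);
        }
    }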

    Read the article

  • USB Device With Embedded Fileserver

    - by Richard Martinez
    I'm attempting to access logs from a proprietary hardware box with no reasonable hope of modifying the software. There is a process on the device to dump log files to a flash drive on the USB port after entering a code sequence. Currently, analysis of the logs requires the following:

    Physical presence at the device
    Manual entry of the code sequence
    Removal of the USB device
    Insertion of the USB device into a normal Linux box

    I'm hoping there is some sort of device that can act as a USB mass storage device but simultaneously make its contents available as a network file share (wired preferred). Does such a device currently exist? A combo hardware/software solution would also work.

    Read the article

  • If I drop my clustered PK and add a new one, what order will my rows be in?

    - by stack
    In SQL Server, I'm looking at TableA, which currently has a uniqueidentifier clustered primary key. The GUID has no meaning in any context. (I'll give you a second to clean up your keyboard and monitor and set down the soda.) I'd like to drop that primary key and add a new unique integer primary key to the table. My question is this: when I drop the index, modify the column from uniqueidentifier to int, and add the new clustered unique primary key to the modified column, will the new PK values be in the order of insertion into the table, or will they be in some other order? Is this the right way to go here? Will this work? (I'm kind of a noobkin with regard to table creation/modification.)

    Read the article

  • Source File not updating Destination Files in Excel

    - by user127105
    I have one source file that holds all my input costs. I then have 30 to 40 destination files (costing sheets) that use links to data in this source file for their various formulae. I was sure when I started this system that any changes I made to the source file, including the insertion of new rows and columns, were picked up automatically by the destination files, so that the formulae always pulled the correct input costs. Now, all of a sudden, if my destination files are closed and I change the structure of the source file by adding rows, the destination files go haywire. They pick up changes to their linked cells, but don't pick up changes to the source sheet that have shifted those cells' relative positions in the sheet. Do I really need to open all 40 destination files at the same time I alter the source file structure? Further info: all the destination files are protected, and I am working in Dropbox.

    Read the article

  • In Excel how can I sum all the numbers above the current cell?

    - by Mark Meuer
    I want to have a column in Excel that consists of a header, a bunch of numbers, and then have the sum of those numbers at the bottom. I'd like the sum to adapt to the insertion of new numbers above the total. Something like this:

    Numbers
    1
    2
    5
    10
    18
    Total

    If I later insert 10 new numbers in the middle of the list, I want the sum to automatically include them. I know the SUM() function can sum a whole column, but if the total is also in that column then it complains about a circular reference. How can I just sum the numbers above the total?

    Read the article

  • SQL SERVER – Enable Identity Insert – Import Export Wizard

    - by pinaldave
    I recently got an email from an old friend who told me that when he tries to execute an SSIS package it fails with an identity error. After some debugging and opening his package, we figured out what was going on. Here is the setup he had in his package:

    Source table with an identity column
    Destination table with an identity column
    The following checkbox was disabled in the Import Export Wizard

    What we did was enable that checkbox, and this fixed the insertion problem he was having with the identity column. He was facing this error because his destination table has the IDENTITY property, which does not allow values to be inserted by the user; the value is generated automatically by the system when new rows are inserted into the table. When a user manually tries to insert a value into that column, SQL Server stops them and throws an error. Once we enabled the "Enable Identity Insert" checkbox, values could be inserted into the identity field, and in this way the exact identity values from the source table were moved to the destination table. Let me know if this blog post was easy to understand. Reference: Pinal Dave (http://blog.SQLAuthority.com), Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology

    Read the article

  • Box Selection and Multi-Line Editing with VS 2010

    - by ScottGu
    This is the twenty-second in a series of blog posts I'm doing on the VS 2010 and .NET 4 release. I've already covered some of the code editor improvements in the VS 2010 release. In particular, I've blogged about the Code Intellisense Improvements, new Code Searching and Navigating Features, HTML, ASP.NET and JavaScript Snippet Support, and improved JavaScript Intellisense. Today's blog post covers a small, but nice, editor improvement with VS 2010 – the ability to use "Box Selection" when performing multi-line editing. This can eliminate keystrokes and enables some slick editing scenarios. [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]

    Box Selection

    Box selection is a feature that has been in Visual Studio for a while (although not many people knew about it). It allows you to select a rectangular region of text within the code editor by holding down the Alt key while selecting the text region with the mouse. With VS 2008 you could then copy or delete the selected text. VS 2010 now enables several more capabilities with box selection, including:

    Text Insertion: Typing with box selection now allows you to insert new text into every selected line
    Paste/Replace: You can now paste the contents of one box selection into another and have the content flow correctly
    Zero-Length Boxes: You can now make a vertical selection zero characters wide to create a multi-line insert point for new or copied text

    These capabilities can be very useful in a variety of scenarios. Some example scenarios: changing access modifiers (private->public), adding comments to multiple lines, setting fields, or grouping multiple statements together.

    Great 3 Minute Box-Selection Video Demo

    Brittany Behrens from the Visual Studio Editor Team has an excellent 3 minute video that shows off a few cool VS 2010 multi-line code editing scenarios with box selection. Watch it to learn a few ways you can use this new box selection capability to optimize your typing in VS 2010 even further. Hope this helps, Scott P.S. You can learn more about the VS Editor by following the Visual Studio Team Blog or by following @VSEditor on Twitter.

    Read the article

  • How to populate a core data store programmatically?

    - by jdmuys
    I have run out of hairs to pull over a crash in the routine below, which populates a Core Data store from a 9000+ line plist file. The crash happens at the very end of the routine, inside the call to [managedObjectContext save:&error], whereas if I save after every object insertion the crash doesn't happen. Of course, saving after every object insertion totally kills the performance (from less than a second to many minutes). I modified my code so that it saves every K insertions, and the crash happens as soon as K = 2. The crash is an out-of-bounds exception for an NSArray:

    Serious application error. Exception was caught during Core Data change processing: *** -[NSCFArray objectAtIndex:]: index (1) beyond bounds (1) with userInfo (null)

    Also maybe relevant: when the exception happens, my fetched results controller's controllerDidChangeContent: delegate routine is in the call stack. It simply calls my table view's endUpdates routine. I am now running out of ideas. How am I supposed to populate a Core Data store together with a table view? Here is the call stack:

    #0 0x901ca4e6 in objc_exception_throw
    #1 0x01d86c3b in +[NSException raise:format:arguments:]
    #2 0x01d86b9a in +[NSException raise:format:]
    #3 0x00072cb9 in _NSArrayRaiseBoundException
    #4 0x00010217 in -[NSCFArray objectAtIndex:]
    #5 0x002eaaa7 in -[UITableView(_UITableViewPrivate) _endCellAnimationsWithContext:]
    #6 0x002def02 in -[UITableView endUpdates]
    #7 0x00004863 in -[AirportViewController controllerDidChangeContent:] at AirportViewController.m:463
    #8 0x01c43be1 in -[NSFetchedResultsController(PrivateMethods) _managedObjectContextDidChange:]
    #9 0x0001462a in _nsnote_callback
    #10 0x01d31005 in _CFXNotificationPostNotification
    #11 0x00011ee0 in -[NSNotificationCenter postNotificationName:object:userInfo:]
    #12 0x01ba417d in -[NSManagedObjectContext(_NSInternalNotificationHandling) _postObjectsDidChangeNotificationWithUserInfo:]
    #13 0x01c03763 in -[NSManagedObjectContext(_NSInternalChangeProcessing) _createAndPostChangeNotification:withDeletions:withUpdates:withRefreshes:]
    #14 0x01b885ea in -[NSManagedObjectContext(_NSInternalChangeProcessing) _processRecentChanges:]
    #15 0x01bbe728 in -[NSManagedObjectContext save:]
    #16 0x000039ea in -[AirportViewController populateAirports] at AirportViewController.m:112

    Here is the code for the routine. I apologize because a number of lines are probably irrelevant, but I'd rather err on that side.
    The crash happens the very first time it calls [managedObjectContext save:&error]:

    - (void) populateAirports
    {
        NSBundle *meBundle = [NSBundle mainBundle];
        NSString *dbPath = [meBundle pathForResource:@"DuckAirportsBin" ofType:@"plist"];
        NSArray *initialAirports = [[NSArray alloc] initWithContentsOfFile:dbPath];

        //*********************************************************************************
        // get existing countries
        NSMutableDictionary *countries = [[NSMutableDictionary alloc] initWithCapacity:200];
        NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init];
        NSEntityDescription *entity = [NSEntityDescription entityForName:@"Country"
                                                   inManagedObjectContext:managedObjectContext];
        [fetchRequest setEntity:entity];
        NSError *error = nil;
        NSArray *values = [managedObjectContext executeFetchRequest:fetchRequest error:&error];
        if (!values) {
            NSLog(@"Unresolved error %@, %@", error, [error userInfo]);
            abort();
        }
        int numCountries = [values count];
        NSLog(@"We have %d countries in store", numCountries);
        for (Country *aCountry in values) {
            [countries setObject:aCountry forKey:aCountry.code];
        }
        [fetchRequest release];

        //*********************************************************************************
        // read airports
        int numAirports = 0;
        int numUnsavedAirports = 0;
    #define MAX_UNSAVED_AIRPORTS_BEFORE_SAVE 2
        numCountries = 0;
        for (NSDictionary *anAirport in initialAirports) {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            NSString *countryCode = [anAirport objectForKey:@"country"];
            Country *thatCountry = [countries objectForKey:countryCode];
            if (!thatCountry) {
                thatCountry = [NSEntityDescription insertNewObjectForEntityForName:@"Country"
                                                            inManagedObjectContext:managedObjectContext];
                thatCountry.code = countryCode;
                thatCountry.name = [anAirport objectForKey:@"country_name"];
                thatCountry.population = 0;
                [countries setObject:thatCountry forKey:countryCode];
                numCountries++;
                NSLog(@"Found %dth country %@=%@", numCountries, countryCode, thatCountry.name);
            }
            // now that we have the country, we create the airport
            Airport *newAirport = [NSEntityDescription insertNewObjectForEntityForName:@"Airport"
                                                                inManagedObjectContext:managedObjectContext];
            newAirport.city = [anAirport objectForKey:@"city"];
            newAirport.code = [anAirport objectForKey:@"code"];
            newAirport.name = [anAirport objectForKey:@"name"];
            newAirport.country_name = [anAirport objectForKey:@"country_name"];
            newAirport.latitude = [NSNumber numberWithDouble:[[anAirport objectForKey:@"latitude"] doubleValue]];
            newAirport.longitude = [NSNumber numberWithDouble:[[anAirport objectForKey:@"longitude"] doubleValue]];
            newAirport.altitude = [NSNumber numberWithDouble:[[anAirport objectForKey:@"altitude"] doubleValue]];
            newAirport.country = thatCountry;
            // [thatCountry addAirportsObject:newAirport];
            numAirports++;
            numUnsavedAirports++;
            if (numUnsavedAirports >= MAX_UNSAVED_AIRPORTS_BEFORE_SAVE) {
                if (![managedObjectContext save:&error]) {
                    NSLog(@"Unresolved error %@, %@", error, [error userInfo]);
                    abort();
                }
                numUnsavedAirports = 0;
            }
            [pool release];
        }

    Read the article

  • Have you changed your coding style recently? It wasn't hard, was it?

    - by Ernelli
    I used to write code in C-like languages using the Allman style as regards the position of braces:

    void foo(int bar)
    {
        if(bar)
        {
            //...
        }
        else
            return;
        //...
    }

    For the last two years I have been working mostly in JavaScript, and when we adopted JSLint as part of our QA process, I had to adapt to the Crockford way of doing things. So I had to change my coding style to:

    function foo(bar) {
        if (bar) {
            //...
        } else {
            return;
        }
        //...
    }

    Now, apart from comparing a C/C++ example with JavaScript, I must say that my JavaScript-Crockford coding style has spread into my C/C++/Java coding when I revise old projects and work on code in those languages, which, for example, have no problem with single-line statements or ambiguous newline insertion. I used to consider the latter format very awkward, and I have never had any problems with adapting my coding style to the one chosen by my predecessors, except for when I was a junior developer, mostly being the sole developer on legacy projects, and the first thing I did was to change the indenting style. But now, after a couple of months, I consider the Allman style a little bit too spacious and feel more comfortable with the K&R-like style. Have you changed your coding style during your career?

    Read the article

  • Annotate source code with diagrams as comments

    - by Steven Lu
    I write a lot of (primarily C++ and JavaScript) code that touches upon computational geometry and graphics and those kinds of topics, so I have found that visual diagrams have been an indispensable part of the process of solving problems. I have determined just now that "oh, wouldn't it just be fantastic if I could somehow attach a hand-drawn diagram to a piece of code as a comment", as this would allow me to come back to something I worked on days, weeks or months earlier and far more quickly re-grok my algorithms. As a visual learner, I feel like this has the potential to improve my productivity with almost every type of programming, because simple diagrams can help with understanding and reasoning about any type of non-trivial data structure. Graphs, for example. During graph theory class at university I had only ever been able to truly comprehend the graph relationships that I could actually draw diagrammatic representations of. So... no IDE to my knowledge lets you save a picture as a comment to code. My thinking was that I or someone else could come up with some reasonably easy-to-use tool that can convert an image into a Base64-encoded string which I can then insert into my code. If the conversion/insertion process can be streamlined enough, it would allow a far better connection between the diagram and the actual code, so I no longer need to search chronologically through my notebooks. Even more awesome: plugins for the IDEs to automatically parse out and display the image. There is absolutely nothing difficult about this from a theoretical point of view. My guess is that it would take some extra time for me to actually figure out how to extend my favorite IDEs and maintain these plugins, so I'd be totally happy with a sort of code post-processor which would do the same parsing out and rendering of the images and show them side by side with the code, inside of a browser or something, since I'm a JavaScript programmer by trade. What do people think? Would anyone pay for this? I would.
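    The convert-an-image-into-a-Base64-string step the poster describes is the easy part. As a rough illustration (the file name and the comment prefix below are made-up conventions, not anything the post specifies), a small Java sketch:

    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.Base64;

    // Reads a diagram image and prints it as a single Base64 line that can be
    // pasted into a source file as a comment (e.g. prefixed with "// diagram:").
    public class DiagramComment {
        public static void main(String[] args) throws Exception {
            byte[] imageBytes = Files.readAllBytes(Paths.get("sketch.png"));   // hypothetical file name
            String encoded = Base64.getEncoder().encodeToString(imageBytes);
            System.out.println("// diagram(png;base64): " + encoded);
            // A viewer plugin or post-processor would strip the prefix,
            // Base64-decode the rest and render it as an image.
        }
    }

    One practical caveat: Base64 inflates the image by roughly a third, so anything beyond a tiny sketch produces comment lines tens of kilobytes long, which is one reason a side-car file plus a link is the more common convention.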

    Read the article

  • Let's introduce the Oracle Enterprise Data Quality family!

    - by Sarah Zanchetti
    The Oracle Enterprise Data Quality family of products helps you achieve maximum value from your business applications by delivering fit-for-purpose data. OEDQ is a state-of-the-art collaborative data quality profiling, analysis, parsing, standardization, matching and merging product, designed to help you understand, improve, protect and govern the quality of the information your business uses, all from a single integrated environment. The Oracle Enterprise Data Quality products are:

    Oracle Enterprise Data Quality Profile and Audit
    Oracle Enterprise Data Quality Parsing and Standardization
    Oracle Enterprise Data Quality Match and Merge
    Oracle Enterprise Data Quality Address Verification Server
    Oracle Enterprise Data Quality Product Data Parsing and Standardization
    Oracle Enterprise Data Quality Product Data Match and Merge

    Also, the following are some of the key features of OEDQ:

    Integrated data profiling, auditing, cleansing and matching
    Browser-based client access
    Ability to handle all types of data – for example customer, product, asset, financial, operational
    Connection to any JDBC-compliant data sources and targets
    Multi-user project support (role-based access, issue tracking, process annotation, and version control)
    Services Oriented Architecture (SOA) - support for designing processes that may be exposed to external applications as a service
    Designed to process large data volumes
    A single repository to hold data along with gathered statistics and project tracking information, with shared access
    Intuitive graphical user interface designed to help you solve real-world information quality issues quickly
    Easy, data-led creation and extension of validation and transformation rules
    Fully extensible architecture allowing the insertion of any required custom processing

    If you need to learn more about EDQ, or get assistance for any kind of issue, the Oracle Technology Network offers a huge range of resources on Oracle software:

    Discuss technical problems and solutions on the Discussion Forums.
    Get hands-on step-by-step tutorials with Oracle By Example.
    Download Sample Code.
    Get the latest news and information on any Oracle product.

    You can also get further help and information with Oracle software from:

    My Oracle Support
    Oracle Support Services

    An Information Center is available, where you can find technical information and fast solutions to the most common already-solved issues: Information Center: Oracle Enterprise Data Quality [ID 1555073.2]

    Read the article

  • Be liberal in what you accept... or not?

    - by Matthieu M.
    [Disclaimer: this question is subjective, but I would prefer answers backed by facts and/or reflections.] I think everyone knows about the Robustness Principle, usually summed up by Postel's Law: Be conservative in what you send; be liberal in what you accept. I would agree that for the design of a widespread communication protocol this may make sense (with the goal of allowing easy extension); however, I have always thought that its application to HTML/CSS was a total failure, with each browser implementing its own silent tweak detection and behavior, making it near impossible to obtain consistent rendering across multiple browsers. I do notice, though, that the RFC for the TCP protocol deems "silent failure" acceptable unless otherwise specified... which is an interesting behavior, to say the least. There are other examples of the application of this principle throughout the software trade that regularly pop up because they have bitten developers. Off the top of my head:

    JavaScript semicolon insertion
    C's (silent) built-in conversions (which would not be so bad if they did not truncate...)

    And there are tools to help implement "smart" behavior:

    name-matching phonetic algorithms (Double Metaphone)
    string distance algorithms (Levenshtein distance)

    However, I find that this approach, while it may be helpful when dealing with non-technical users or to help users recover from errors, has some drawbacks when applied to the design of a library/class interface:

    it is somewhat subjective whether the algorithm guesses "right", and thus it may go against the Principle of Least Astonishment
    it makes the implementation more difficult, with more chances to introduce bugs (a violation of YAGNI?)
    it makes the behavior more susceptible to change, as any modification of the "guess" routine may break old programs, nearly excluding refactoring possibilities... from the start!

    And this is what led me to the following question: when designing an interface (library, class, message), do you lean toward the robustness principle or not? I myself tend to be quite strict, using extensive input validation on my interfaces, and I was wondering if I was perhaps too strict.
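    As a concrete example of the "string distance" tooling mentioned above, here is a minimal Levenshtein distance implementation (a textbook dynamic-programming version; the class name is mine):

    // Classic dynamic-programming Levenshtein distance: the minimum number of
    // single-character insertions, deletions and substitutions turning a into b.
    public class Levenshtein {
        public static int distance(String a, String b) {
            int[] prev = new int[b.length() + 1];
            int[] curr = new int[b.length() + 1];
            for (int j = 0; j <= b.length(); j++) {
                prev[j] = j;                                  // turning "" into b[0..j) takes j insertions
            }
            for (int i = 1; i <= a.length(); i++) {
                curr[0] = i;                                  // turning a[0..i) into "" takes i deletions
                for (int j = 1; j <= b.length(); j++) {
                    int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                    curr[j] = Math.min(Math.min(curr[j - 1] + 1,   // insertion
                                                prev[j] + 1),      // deletion
                                       prev[j - 1] + cost);        // substitution (or match)
                }
                int[] tmp = prev; prev = curr; curr = tmp;    // reuse the two rows
            }
            return prev[b.length()];
        }

        public static void main(String[] args) {
            System.out.println(distance("accept", "except"));  // prints 2
        }
    }

    Whether a library should use this kind of fuzzy matching to guess what a caller meant is exactly the trade-off the post describes: the algorithm itself is cheap, but the behavioral contract it creates is not.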

    Read the article

< Previous Page | 65 66 67 68 69 70 71 72 73 74 75 76  | Next Page >