Search Results

Search found 5175 results on 207 pages for 'red soft adair stefanwoe'.

  • Podcast Show Notes: The Red Room Interview – Part 2

    - by Bob Rhubart
    Red Room bloggers Sean Boiling, Richard Ward, and Mervin Chiang bring their in-the-trenches perspective to the conversation once again in this week’s edition of the OTN ArchBeat Podcast. Listen. (Missed last week? No problemo: Listen to Part 1) In this segment the conversation turns to SOA governance and balancing the need for reuse against the need for speed. It’s no mystery that many people react to the term “SOA Governance” in much the same way as they would to the sound of Darth Vader’s respirator. But Mervin explains how a simple change in terminology can go a long way toward lowering blood pressure. Those interested in connecting with Sean, Richard, or Mervin can do so via the links listed below: Sean Boiling - Sales Consulting Manager for Oracle Fusion Middleware LinkedIn | Twitter | Blog Richard Ward - SOA Channel Development Manager at Oracle LinkedIn | Blog Mervin Chiang - Consulting Principal at Leonardo Consulting LinkedIn | Twitter | Blog And you’ll find the complete list of the Red Room SOA Best Practice Posts in last week’s show notes. The third and final segment of the Red Room series runs next week. I have enough material from the original interview for a fourth program, but it’ll have to wait. Also, as mentioned last week, the podcast name change is now complete, from Arch2Arch to ArchBeat. As WPBH-TV9 weatherman Phil Connors says, “Anything different is good.” Technorati Tags: archbeat, podcast, arch2arch, soa, soa governance, oracle, otn Flickr Tags: archbeat, podcast, arch2arch, soa, soa governance, oracle, otn

    Read the article

  • Podcast Show Notes: Red Room Interview – Part 3: Ninja BPM

    - by Bob Rhubart
    The third and final segment of my conversation with Red Room bloggers Sean Boiling, Richard Ward, and Mervin Chiang is now available. Listen to Part 1 Listen to Part 2 Listen to Part 3 As you’ll hear, this segment gets its title from another example of Mervin’s tactic for tweaking terminology to make it easier to sell stakeholders on certain SOA concepts. These are some very bright, very knowledgeable guys, so I encourage you to connect with them via the links below to pick their brains on any SOA or related issues that might have you reaching for the aspirin bottle. Sean Boiling - Sales Consulting Manager for Oracle Fusion Middleware LinkedIn | Twitter | Blog Richard Ward - SOA Channel Development Manager at Oracle LinkedIn | Blog Mervin Chiang - Consulting Principal at Leonardo Consulting LinkedIn | Twitter | Blog Once again, you’ll find the complete list of Red Room SOA Best Practice Posts here. Up Next Next week’s program features another panel discussion recorded during a virtual mini meet-up. The panel includes Oracle ACE Directors Mike van Alst (IT-Eye) and Jordan Braunstein (TUSC) along with The Definitive Guide to SOA: Oracle Service Bus author Jeff Davies. Stay tuned: RSS Technorati Tags: oracle technology network, oracle, archbeat, podcast, arch2arch, soa, bpm del.icio.us Tags: oracle technology network, oracle, archbeat, podcast, arch2arch, soa, bpm

    Read the article

  • R and SPSS difference

    - by sfactor
    I will be analysing a vast amount of network-traffic-related data shortly, and I will pre-process the data in order to analyse it. I have found that R and SPSS are among the most popular tools for statistical analysis. I will also be generating quite a lot of graphs and charts, so I was wondering what the basic differences are between these two software packages. I am not asking which one is better; I just want to know the differences in workflow between the two, besides the fact that SPSS has a GUI. I will mostly be working with scripts in either case, so I wanted to know about the other differences.

    Read the article

  • Android: how can I tell if the soft keyboard is showing or not?

    - by binnyb
    Here's the dilemma: I am showing a screen with three input fields and two buttons inside a tab (there are three tabs total, and they are at the bottom of the screen). The two buttons are set to the bottom left and right of the screen, right above the tabs. When I tap an input field, the tabs and buttons are all pushed up on top of the keyboard. I want to push only the buttons up and leave the tabs where they originally are, at the bottom. I am thinking of setting the visibility of the tabs to GONE once I determine that the soft keyboard is showing, and back to VISIBLE once the soft keyboard is gone. Is there some kind of listener for the soft keyboard, or maybe for the input field? Maybe some tricky use of OnFocusChangeListener on the EditText? How can I determine whether the keyboard is visible or not?
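
    There is no built-in keyboard-visibility listener in the Android SDK, so a common workaround is to watch the activity's root view for large layout-height changes. Below is a minimal sketch of that approach (not from the original question; the layout name R.layout.main, the id R.id.tabs, and the 25% threshold are assumptions for illustration):

        import android.app.Activity;
        import android.graphics.Rect;
        import android.os.Bundle;
        import android.view.View;
        import android.view.ViewTreeObserver;
        import android.widget.TabWidget;

        public class KeyboardAwareActivity extends Activity {

            private TabWidget tabWidget; // hypothetical reference to the tab bar

            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);                   // hypothetical layout
                tabWidget = (TabWidget) findViewById(R.id.tabs); // hypothetical id

                final View root = findViewById(android.R.id.content);
                root.getViewTreeObserver().addOnGlobalLayoutListener(
                        new ViewTreeObserver.OnGlobalLayoutListener() {
                    public void onGlobalLayout() {
                        // Compare the currently visible frame with the full root-view height.
                        Rect visible = new Rect();
                        root.getWindowVisibleDisplayFrame(visible);
                        int heightDiff = root.getRootView().getHeight() - visible.height();
                        // Heuristic: if more than ~25% of the screen is hidden,
                        // assume the soft keyboard is showing and hide the tabs.
                        boolean keyboardShowing =
                                heightDiff > root.getRootView().getHeight() / 4;
                        tabWidget.setVisibility(keyboardShowing ? View.GONE : View.VISIBLE);
                    }
                });
            }
        }

    Whether this heuristic beats simply letting android:windowSoftInputMode (adjustResize vs. adjustPan) rearrange the layout depends on the exact screen design, so treat it as one option rather than the definitive answer.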

    Read the article

  • SQL Server Developer Tools – Codename Juneau vs. Red-Gate SQL Source Control

    - by Ajarn Mark Caldwell
    So how do the new SQL Server Developer Tools (previously code-named Juneau) stack up against SQL Source Control?  Read on to find out. At the PASS Community Summit a couple of weeks ago, it was announced that the previously code-named Juneau software would be released under the name of SQL Server Developer Tools with the release of SQL Server 2012.  This replacement for Database Projects in Visual Studio (also known in a former life as Data Dude) has some great new features.  I won’t attempt to describe them all here, but I will applaud Microsoft for making major improvements.  One of my favorite changes is the way database elements are broken down.  Previously every little thing was in its own file.  For example, indexes were each in their own file.  I always hated that.  Now, SSDT uses a pattern similar to Red-Gate’s and puts the indexes and keys into the same file as the overall table definition. Of course there are really cool features to keep your database model in sync with the actual source scripts, and the rename refactoring feature is now touted as being more than just a search and replace, but rather a “semantic-aware” search and replace.  Funny, it reminds me of SQL Prompt’s Smart Rename feature.  But I’m not writing this just to criticize Microsoft and argue that they are late to the party with this feature set.  Instead, I do see it as a viable alternative for folks who want all of their source code to be version controlled, but there are a couple of key trade-offs that you need to know about when you choose which tool set to use. First, the basics Both tool sets integrate with a wide variety of source control systems including the most popular: Subversion, GIT, Vault, and Team Foundation Server.  Both tools have integrated functionality to produce objects to upgrade your target database when you are ready (DACPACs in SSDT, integration with SQL Compare for SQL Source Control).  If you regularly live in Visual Studio or the Business Intelligence Development Studio (BIDS) then SSDT will likely be comfortable for you.  Like BIDS, SSDT is a Visual Studio Project Type that comes with SQL Server, and if you don’t already have Visual Studio installed, it will install the shell for you.  If you already have Visual Studio 2010 installed, then it will just add this as an available project type.  On the other hand, if you regularly live in SQL Server Management Studio (SSMS) then you will really enjoy the SQL Source Control integration from within SSMS.  Both tool sets store their database model in script files.  In SSDT, these are on your file system like other source files; in SQL Source Control, these are stored in the folder structure in your source control system, and you can always GET them to your file system if you want to browse them directly. For me, the key differentiating factors are 1) a single, unified check-in, and 2) migration scripts.  How you value those two features will likely make your decision for you. Unified Check-In If you do a continuous-integration (CI) style of development that triggers an automated build with unit testing on every check-in of source code, and you use Visual Studio for the rest of your development, then you will want to really consider SSDT.  Because it is just another project in Visual Studio, it can be added to your existing Solution, and you can then do a complete, or unified single check-in of all changes whether they are application or database changes.  
This is simply not possible with SQL Source Control because it is in a different development tool (SSMS instead of Visual Studio) and there is no way to do one unified check-in between the two.  You CAN do really fast back-to-back check-ins, but there is the possibility that the automated build that is triggered from the first check-in will cause your unit tests to fail and the CI tool to report that you broke the build.  Of course, the automated build that is triggered from the second check-in which contains the “other half” of your changes should pass and so the amount of time that the build was broken may be very, very short, but if that is very, very important to you, then SQL Source Control just won’t work; you’ll have to use SSDT. Refactoring and Migrations If you work on a mature system, or on a not-so-mature but also not-so-well-designed system, where you want to refactor the database schema as you go along, but you can’t have data suddenly disappearing from your target system, then you’ll probably want to go with SQL Source Control.  As I wrote previously, there are a number of changes which you can make to your database that the comparison tools (both from Microsoft and Red Gate) simply cannot handle without the possibility (or probability) of data loss.  Currently, SSDT only offers you the ability to inject PRE and POST custom deployment scripts.  There is no way to insert your own script in the middle to override the default behavior of the tool.  In version 3.0 of SQL Source Control (Early Access version now available) you have that ability to create your own custom migration script to take the place of the commands that the tool would have done, and ensure the preservation of your data.  Or, even if the default tool behavior would have worked, but you simply know a better way then you can take control and do things your way instead of theirs. You Decide In the environment I work in, our automated builds are not triggered off of check-ins, but off of the clock (currently once per night) and so there is no point at which the automated build and unit tests will be triggered without having both sides of the development effort already checked-in.  Therefore having a unified check-in, while handy, is not critical for us.  As for migration scripts, these are critically important to us.  We do a lot of new development on systems that have already been in production for years, and it is not uncommon for us to need to do a refactoring of the database.  Because of the maturity of the existing system, that often involves data migrations or other additional SQL tasks that the comparison tools just can’t detect on their own.  Therefore, the ability to create a custom migration script to override the tool’s default behavior is very important to us.  And so, you can see why we will continue to use Red Gate SQL Source Control for the foreseeable future.

    Read the article

  • FREE! SQL Scripts Manager – a Christmas gift from Red Gate

    Red Gate has released SQL Scripts Manager, a free tool containing over 25 scripts written by SQL Server experts, to help you automate common troubleshooting, diagnostic, and maintenance tasks.

    Read the article

  • Tips to Find the Best Keyword Research Software

    Keyword research plays a critical role in search engine optimization. The search engines use keyword phrases to identify the web pages that are relevant to those words. For instance, if you are selling soft toys, then soft baby toys, buy soft toys, and soft play toys are some of the right keywords that can bring a good amount of traffic to your site.

    Read the article

  • What's happening in Red Gate's .NET Developer Tools division?

    .NET 4.0, Silverlight 4, F# decompilation in .NET Reflector, our crazy shipping schedule, and some prize draw winners. Yes, with a list of topics that broad, it can only be another update on what's happening in Red Gate's .NET Developer Tools division.

    Read the article

  • SQL Server 2008: If Multiple Values Set In Other Multiple Values Set

    - by AJH
    In SQL, is there any way to accomplish something like this? This is based on a report built in SQL Server Report Builder, where the user can specify multiple text values as a single report parameter. The query for the report grabs all of the values the user selected and stores them in a single variable. I need a way for the query to return only records that have associations to EVERY value the user specified. -- Assume there's a table of Elements with thousands of entries. -- Now we declare a list of properties for those Elements to be associated with. create table #masterTable ( ElementId int, Text varchar(10) ) insert into #masterTable (ElementId, Text) values (1, 'Red'); insert into #masterTable (ElementId, Text) values (1, 'Coarse'); insert into #masterTable (ElementId, Text) values (1, 'Dense'); insert into #masterTable (ElementId, Text) values (2, 'Red'); insert into #masterTable (ElementId, Text) values (2, 'Smooth'); insert into #masterTable (ElementId, Text) values (2, 'Hollow'); -- Element 1 is Red, Coarse, and Dense. Element 2 is Red, Smooth, and Hollow. -- The real table is actually much much larger than this; this is just an example. -- This is me trying to replicate how SQL Server Report Builder treats -- report parameters in its queries. The user selects one, some, all, -- or no properties from a list. The written query treats the user's -- selections as a single variable called @Properties. -- Example scenario 1: User only wants to see Elements that are BOTH Red and Dense. select e.* from Elements e where (@Properties) --ideally a set containing only Red and Dense in (select Text from #masterTable where ElementId = e.Id) --ideally a set containing only Red, Coarse, and Dense --Both Red and Dense are within Element 1's properties (Red, Coarse, Dense), so Element 1 gets returned, but not Element 2. -- Example scenario 2: User only wants to see Elements that are BOTH Red and Hollow. select e.* from Elements e where (@Properties) --ideally a set containing only Red and Hollow in (select Text from #masterTable where ElementId = e.Id) --Both Red and Hollow are within Element 2's properties (Red, Smooth, Hollow), so Element 2 gets returned, but not Element 1. --Example Scenario 3: User only picked the Red option. select e.* from Elements e where (@Properties) --ideally a set containing only Red in (select Text from #masterTable where ElementId = e.Id) --Red is within both Element 1 and Element 2's properties, so both Element 1 and Element 2 get returned. The above syntax doesn't actually work because SQL doesn't seem to allow multiple values on the left side of the "in" comparison. Error that returns: Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression. Am I even on the right track here? Sorry if the example looks long-winded or confusing.
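
    A minimal sketch of one way this is often solved (not from the original question): treat it as relational division. The hypothetical #selected table below stands in for the user's selections, however they arrive there (a string-split of @Properties, or a table the report fills from the multi-value parameter); an element then qualifies only when its count of matching properties equals the number of selected values.

        -- Hypothetical table holding the user's selections for scenario 1 (Red and Dense).
        create table #selected (Text varchar(10));
        insert into #selected (Text) values ('Red');
        insert into #selected (Text) values ('Dense');

        -- Return only elements associated with EVERY selected property:
        -- the number of distinct matches must equal the number of selections.
        select e.*
        from Elements e
        where (select count(distinct m.Text)
               from #masterTable m
               join #selected s on s.Text = m.Text
               where m.ElementId = e.Id)
            = (select count(*) from #selected);

    With Red and Dense selected this returns Element 1 only; with just Red selected it returns both elements, matching the scenarios above. If the parameter can only be consumed as IN (@Properties), the same comparison can be written with "where m.Text in (@Properties)" in place of the join, provided the count of selected values is also made available to the query.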

    Read the article

  • Mono Project: How to install Mono framework on Red Hat Linux which is compiled on CentOS?

    - by funwithcoding
    We have Red Hat Enterprise Linux servers at our workplace. However, we don't have Red Hat Linux desktops, so I used CentOS 5.4 to compile the Mono sources and generated the Mono framework for CentOS, tested it with some sample code, and I am satisfied. I want to transfer this compiled framework to Red Hat Enterprise Linux 5. How can I do that? Do I have to compile the Mono framework statically, or do I have to copy the linked libraries as well? I am not very familiar with Linux. Any help is highly appreciated.

    Read the article

  • A way of doing real-world test-driven development (and some thoughts about it)

    - by Thomas Weller
    Lately, I exchanged some arguments with Derick Bailey about some details of the red-green-refactor cycle of the Test-driven development process. In short, the issue revolved around the fact that it’s not enough to have a test red or green, but it’s also important to have it red or green for the right reasons. While for me, it’s sufficient to initially have a NotImplementedException in place, Derick argues that this is not totally correct (see these two posts: Red/Green/Refactor, For The Right Reasons and Red For The Right Reason: Fail By Assertion, Not By Anything Else). And he’s right. But on the other hand, I had no idea how his insights could have any practical consequence for my own individual interpretation of the red-green-refactor cycle (which is not really red-green-refactor, at least not in its pure sense, see the rest of this article). This made me think deeply for some days now. In the end I found out that the ‘right reason’ changes in my understanding depending on what development phase I’m in. To make this clear (at least I hope it becomes clear…) I started to describe my way of working in some detail, and then something strange happened: The scope of the article slightly shifted from focusing ‘only’ on the ‘right reason’ issue to something more general, which you might describe as something like  'Doing real-world TDD in .NET , with massive use of third-party add-ins’. This is because I feel that there is a more general statement about Test-driven development to make:  It’s high time to speak about the ‘How’ of TDD, not always only the ‘Why’. Much has been said about this, and me myself also contributed to that (see here: TDD is not about testing, it's about how we develop software). But always justifying what you do is very unsatisfying in the long run, it is inherently defensive, and it costs time and effort that could be used for better and more important things. And frankly: I’m somewhat sick and tired of repeating time and again that the test-driven way of software development is highly preferable for many reasons - I don’t want to spent my time exclusively on stating the obvious… So, again, let’s say it clearly: TDD is programming, and programming is TDD. Other ways of programming (code-first, sometimes called cowboy-coding) are exceptional and need justification. – I know that there are many people out there who will disagree with this radical statement, and I also know that it’s not a description of the real world but more of a mission statement or something. But nevertheless I’m absolutely sure that in some years this statement will be nothing but a platitude. Side note: Some parts of this post read as if I were paid by Jetbrains (the manufacturer of the ReSharper add-in – R#), but I swear I’m not. Rather I think that Visual Studio is just not production-complete without it, and I wouldn’t even consider to do professional work without having this add-in installed... The three parts of a software component Before I go into some details, I first should describe my understanding of what belongs to a software component (assembly, type, or method) during the production process (i.e. the coding phase). Roughly, I come up with the three parts shown below:   First, we need to have some initial sort of requirement. This can be a multi-page formal document, a vague idea in some programmer’s brain of what might be needed, or anything in between. In either way, there has to be some sort of requirement, be it explicit or not. 
– At the C# micro-level, the best way that I found to formulate that is to define interfaces for just about everything, even for internal classes, and to provide them with exhaustive xml comments. The next step then is to re-formulate these requirements in an executable form. This is specific to the respective programming language. - For C#/.NET, the Gallio framework (which includes MbUnit) in conjunction with the ReSharper add-in for Visual Studio is my toolset of choice. The third part then finally is the production code itself. It’s development is entirely driven by the requirements and their executable formulation. This is the delivery, the two other parts are ‘only’ there to make its production possible, to give it a decent quality and reliability, and to significantly reduce related costs down the maintenance timeline. So while the first two parts are not really relevant for the customer, they are very important for the developer. The customer (or in Scrum terms: the Product Owner) is not interested at all in how  the product is developed, he is only interested in the fact that it is developed as cost-effective as possible, and that it meets his functional and non-functional requirements. The rest is solely a matter of the developer’s craftsmanship, and this is what I want to talk about during the remainder of this article… An example To demonstrate my way of doing real-world TDD, I decided to show the development of a (very) simple Calculator component. The example is deliberately trivial and silly, as examples always are. I am totally aware of the fact that real life is never that simple, but I only want to show some development principles here… The requirement As already said above, I start with writing down some words on the initial requirement, and I normally use interfaces for that, even for internal classes - the typical question “intf or not” doesn’t even come to mind. I need them for my usual workflow and using them automatically produces high componentized and testable code anyway. To think about their usage in every single situation would slow down the production process unnecessarily. So this is what I begin with: namespace Calculator {     /// <summary>     /// Defines a very simple calculator component for demo purposes.     /// </summary>     public interface ICalculator     {         /// <summary>         /// Gets the result of the last successful operation.         /// </summary>         /// <value>The last result.</value>         /// <remarks>         /// Will be <see langword="null" /> before the first successful operation.         /// </remarks>         double? LastResult { get; }       } // interface ICalculator   } // namespace Calculator So, I’m not beginning with a test, but with a sort of code declaration - and still I insist on being 100% test-driven. There are three important things here: Starting this way gives me a method signature, which allows to use IntelliSense and AutoCompletion and thus eliminates the danger of typos - one of the most regular, annoying, time-consuming, and therefore expensive sources of error in the development process. In my understanding, the interface definition as a whole is more of a readable requirement document and technical documentation than anything else. So this is at least as much about documentation than about coding. The documentation must completely describe the behavior of the documented element. I normally use an IoC container or some sort of self-written provider-like model in my architecture. 
In either case, I need my components defined via service interfaces anyway. - I will use the LinFu IoC framework here, for no other reason as that is is very simple to use. The ‘Red’ (pt. 1)   First I create a folder for the project’s third-party libraries and put the LinFu.Core dll there. Then I set up a test project (via a Gallio project template), and add references to the Calculator project and the LinFu dll. Finally I’m ready to write the first test, which will look like the following: namespace Calculator.Test {     [TestFixture]     public class CalculatorTest     {         private readonly ServiceContainer container = new ServiceContainer();           [Test]         public void CalculatorLastResultIsInitiallyNull()         {             ICalculator calculator = container.GetService<ICalculator>();               Assert.IsNull(calculator.LastResult);         }       } // class CalculatorTest   } // namespace Calculator.Test       This is basically the executable formulation of what the interface definition states (part of). Side note: There’s one principle of TDD that is just plain wrong in my eyes: I’m talking about the Red is 'does not compile' thing. How could a compiler error ever be interpreted as a valid test outcome? I never understood that, it just makes no sense to me. (Or, in Derick’s terms: this reason is as wrong as a reason ever could be…) A compiler error tells me: Your code is incorrect, but nothing more.  Instead, the ‘Red’ part of the red-green-refactor cycle has a clearly defined meaning to me: It means that the test works as intended and fails only if its assumptions are not met for some reason. Back to our Calculator. When I execute the above test with R#, the Gallio plugin will give me this output: So this tells me that the test is red for the wrong reason: There’s no implementation that the IoC-container could load, of course. So let’s fix that. With R#, this is very easy: First, create an ICalculator - derived type:        Next, implement the interface members: And finally, move the new class to its own file: So far my ‘work’ was six mouse clicks long, the only thing that’s left to do manually here, is to add the Ioc-specific wiring-declaration and also to make the respective class non-public, which I regularly do to force my components to communicate exclusively via interfaces: This is what my Calculator class looks like as of now: using System; using LinFu.IoC.Configuration;   namespace Calculator {     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult         {             get             {                 throw new NotImplementedException();             }         }     } } Back to the test fixture, we have to put our IoC container to work: [TestFixture] public class CalculatorTest {     #region Fields       private readonly ServiceContainer container = new ServiceContainer();       #endregion // Fields       #region Setup/TearDown       [FixtureSetUp]     public void FixtureSetUp()     {        container.LoadFrom(AppDomain.CurrentDomain.BaseDirectory, "Calculator.dll");     }       ... Because I have a R# live template defined for the setup/teardown method skeleton as well, the only manual coding here again is the IoC-specific stuff: two lines, not more… The ‘Red’ (pt. 2) Now, the execution of the above test gives the following result: This time, the test outcome tells me that the method under test is called. 
And this is the point, where Derick and I seem to have somewhat different views on the subject: Of course, the test still is worthless regarding the red/green outcome (or: it’s still red for the wrong reasons, in that it gives a false negative). But as far as I am concerned, I’m not really interested in the test outcome at this point of the red-green-refactor cycle. Rather, I only want to assert that my test actually calls the right method. If that’s the case, I will happily go on to the ‘Green’ part… The ‘Green’ Making the test green is quite trivial. Just make LastResult an automatic property:     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult { get; private set; }     }         One more round… Now on to something slightly more demanding (cough…). Let’s state that our Calculator exposes an Add() method:         ...   /// <summary>         /// Adds the specified operands.         /// </summary>         /// <param name="operand1">The operand1.</param>         /// <param name="operand2">The operand2.</param>         /// <returns>The result of the additon.</returns>         /// <exception cref="ArgumentException">         /// Argument <paramref name="operand1"/> is &lt; 0.<br/>         /// -- or --<br/>         /// Argument <paramref name="operand2"/> is &lt; 0.         /// </exception>         double Add(double operand1, double operand2);       } // interface ICalculator A remark: I sometimes hear the complaint that xml comment stuff like the above is hard to read. That’s certainly true, but irrelevant to me, because I read xml code comments with the CR_Documentor tool window. And using that, it looks like this:   Apart from that, I’m heavily using xml code comments (see e.g. here for a detailed guide) because there is the possibility of automating help generation with nightly CI builds (using MS Sandcastle and the Sandcastle Help File Builder), and then publishing the results to some intranet location.  This way, a team always has first class, up-to-date technical documentation at hand about the current codebase. (And, also very important for speeding up things and avoiding typos: You have IntelliSense/AutoCompletion and R# support, and the comments are subject to compiler checking…).     Back to our Calculator again: Two more R# – clicks implement the Add() skeleton:         ...           public double Add(double operand1, double operand2)         {             throw new NotImplementedException();         }       } // class Calculator As we have stated in the interface definition (which actually serves as our requirement document!), the operands are not allowed to be negative. So let’s start implementing that. Here’s the test: [Test] [Row(-0.5, 2)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); } As you can see, I’m using a data-driven unit test method here, mainly for these two reasons: Because I know that I will have to do the same test for the second operand in a few seconds, I save myself from implementing another test method for this purpose. Rather, I only will have to add another Row attribute to the existing one. From the test report below, you can see that the argument values are explicitly printed out. 
This can be a valuable documentation feature even when everything is green: One can quickly review what values were tested exactly - the complete Gallio HTML-report (as it will be produced by the Continuous Integration runs) shows these values in a quite clear format (see below for an example). Back to our Calculator development again, this is what the test result tells us at the moment: So we’re red again, because there is not yet an implementation… Next we go on and implement the necessary parameter verification to become green again, and then we do the same thing for the second operand. To make a long story short, here’s the test and the method implementation at the end of the second cycle: // in CalculatorTest:   [Test] [Row(-0.5, 2)] [Row(295, -123)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); }   // in Calculator: public double Add(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }     if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }     throw new NotImplementedException(); } So far, we have sheltered our method from unwanted input, and now we can safely operate on the parameters without further caring about their validity (this is my interpretation of the Fail Fast principle, which is regarded here in more detail). Now we can think about the method’s successful outcomes. First let’s write another test for that: [Test] [Row(1, 1, 2)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } Again, I’m regularly using row based test methods for these kinds of unit tests. The above shown pattern proved to be extremely helpful for my development work, I call it the Defined-Input/Expected-Output test idiom: You define your input arguments together with the expected method result. There are two major benefits from that way of testing: In the course of refining a method, it’s very likely to come up with additional test cases. In our case, we might add tests for some edge cases like ‘one of the operands is zero’ or ‘the sum of the two operands causes an overflow’, or maybe there’s an external test protocol that has to be fulfilled (e.g. an ISO norm for medical software), and this results in the need of testing against additional values. In all these scenarios we only have to add another Row attribute to the test. Remember that the argument values are written to the test report, so as a side-effect this produces valuable documentation. (This can become especially important if the fulfillment of some sort of external requirements has to be proven). 
So your test method might look something like that in the end: [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 2)] [Row(0, 999999999, 999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, double.MaxValue)] [Row(4, double.MaxValue - 2.5, double.MaxValue)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } And this will produce the following HTML report (with Gallio):   Not bad for the amount of work we invested in it, huh? - There might be scenarios where reports like that can be useful for demonstration purposes during a Scrum sprint review… The last requirement to fulfill is that the LastResult property is expected to store the result of the last operation. I don’t show this here, it’s trivial enough and brings nothing new… And finally: Refactor (for the right reasons) To demonstrate my way of going through the refactoring portion of the red-green-refactor cycle, I added another method to our Calculator component, namely Subtract(). Here’s the code (tests and production): // CalculatorTest.cs:   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtract(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, result); }   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtractGivesExpectedLastResult(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, calculator.LastResult); }   ...   // ICalculator.cs: /// <summary> /// Subtracts the specified operands. /// </summary> /// <param name="operand1">The operand1.</param> /// <param name="operand2">The operand2.</param> /// <returns>The result of the subtraction.</returns> /// <exception cref="ArgumentException"> /// Argument <paramref name="operand1"/> is &lt; 0.<br/> /// -- or --<br/> /// Argument <paramref name="operand2"/> is &lt; 0. /// </exception> double Subtract(double operand1, double operand2);   ...   // Calculator.cs:   public double Subtract(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }       if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }       return (this.LastResult = operand1 - operand2).Value; }   Obviously, the argument validation stuff that was produced during the red-green part of our cycle duplicates the code from the previous Add() method. So, to avoid code duplication and minimize the number of code lines of the production code, we do an Extract Method refactoring. 
One more time, this is only a matter of a few mouse clicks (and giving the new method a name) with R#: Having done that, our production code finally looks like that: using System; using LinFu.IoC.Configuration;   namespace Calculator {     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         #region ICalculator           public double? LastResult { get; private set; }           public double Add(double operand1, double operand2)         {             ThrowIfOneOperandIsInvalid(operand1, operand2);               return (this.LastResult = operand1 + operand2).Value;         }           public double Subtract(double operand1, double operand2)         {             ThrowIfOneOperandIsInvalid(operand1, operand2);               return (this.LastResult = operand1 - operand2).Value;         }           #endregion // ICalculator           #region Implementation (Helper)           private static void ThrowIfOneOperandIsInvalid(double operand1, double operand2)         {             if (operand1 < 0.0)             {                 throw new ArgumentException("Value must not be negative.", "operand1");             }               if (operand2 < 0.0)             {                 throw new ArgumentException("Value must not be negative.", "operand2");             }         }           #endregion // Implementation (Helper)       } // class Calculator   } // namespace Calculator But is the above worth the effort at all? It’s obviously trivial and not very impressive. All our tests were green (for the right reasons), and refactoring the code did not change anything. It’s not immediately clear how this refactoring work adds value to the project. Derick puts it like this: STOP! Hold on a second… before you go any further and before you even think about refactoring what you just wrote to make your test pass, you need to understand something: if your done with your requirements after making the test green, you are not required to refactor the code. I know… I’m speaking heresy, here. Toss me to the wolves, I’ve gone over to the dark side! Seriously, though… if your test is passing for the right reasons, and you do not need to write any test or any more code for you class at this point, what value does refactoring add? Derick immediately answers his own question: So why should you follow the refactor portion of red/green/refactor? When you have added code that makes the system less readable, less understandable, less expressive of the domain or concern’s intentions, less architecturally sound, less DRY, etc, then you should refactor it. I couldn’t state it more precise. From my personal perspective, I’d add the following: You have to keep in mind that real-world software systems are usually quite large and there are dozens or even hundreds of occasions where micro-refactorings like the above can be applied. It’s the sum of them all that counts. And to have a good overall quality of the system (e.g. in terms of the Code Duplication Percentage metric) you have to be pedantic on the individual, seemingly trivial cases. My job regularly requires the reading and understanding of ‘foreign’ code. 
So code quality/readability really makes a HUGE difference for me – sometimes it can be even the difference between project success and failure… Conclusions The above described development process emerged over the years, and there were mainly two things that guided its evolution (you might call it eternal principles, personal beliefs, or anything in between): Test-driven development is the normal, natural way of writing software, code-first is exceptional. So ‘doing TDD or not’ is not a question. And good, stable code can only reliably be produced by doing TDD (yes, I know: many will strongly disagree here again, but I’ve never seen high-quality code – and high-quality code is code that stood the test of time and causes low maintenance costs – that was produced code-first…) It’s the production code that pays our bills in the end. (Though I have seen customers these days who demand an acceptance test battery as part of the final delivery. Things seem to go into the right direction…). The test code serves ‘only’ to make the production code work. But it’s the number of delivered features which solely counts at the end of the day - no matter how much test code you wrote or how good it is. With these two things in mind, I tried to optimize my coding process for coding speed – or, in business terms: productivity - without sacrificing the principles of TDD (more than I’d do either way…).  As a result, I consider a ratio of about 3-5/1 for test code vs. production code as normal and desirable. In other words: roughly 60-80% of my code is test code (This might sound heavy, but that is mainly due to the fact that software development standards only begin to evolve. The entire software development profession is very young, historically seen; only at the very beginning, and there are no viable standards yet. If you think about software development as a kind of casting process, where the test code is the mold and the resulting production code is the final product, then the above ratio sounds no longer extraordinary…) Although the above might look like very much unnecessary work at first sight, it’s not. With the aid of the mentioned add-ins, doing all the above is a matter of minutes, sometimes seconds (while writing this post took hours and days…). The most important thing is to have the right tools at hand. Slow developer machines or the lack of a tool or something like that - for ‘saving’ a few 100 bucks -  is just not acceptable and a very bad decision in business terms (though I quite some times have seen and heard that…). Production of high-quality products needs the usage of high-quality tools. This is a platitude that every craftsman knows… The here described round-trip will take me about five to ten minutes in my real-world development practice. I guess it’s about 30% more time compared to developing the ‘traditional’ (code-first) way. But the so manufactured ‘product’ is of much higher quality and massively reduces maintenance costs, which is by far the single biggest cost factor, as I showed in this previous post: It's the maintenance, stupid! (or: Something is rotten in developerland.). In the end, this is a highly cost-effective way of software development… But on the other hand, there clearly is a trade-off here: coding speed vs. code quality/later maintenance costs. The here described development method might be a perfect fit for the overwhelming majority of software projects, but there certainly are some scenarios where it’s not - e.g. 
if time-to-market is crucial for a software project. So this is a business decision in the end. It’s just that you have to know what you’re doing and what consequences this might have… Some last words First, I’d like to thank Derick Bailey again. His two aforementioned posts (which I strongly recommend for reading) inspired me to think deeply about my own personal way of doing TDD and to clarify my thoughts about it. I wouldn’t have done that without this inspiration. I really enjoy that kind of discussions… I agree with him in all respects. But I don’t know (yet?) how to bring his insights into the described production process without slowing things down. The above described method proved to be very “good enough” in my practical experience. But of course, I’m open to suggestions here… My rationale for now is: If the test is initially red during the red-green-refactor cycle, the ‘right reason’ is: it actually calls the right method, but this method is not yet operational. Later on, when the cycle is finished and the tests become part of the regular, automated Continuous Integration process, ‘red’ certainly must occur for the ‘right reason’: in this phase, ‘red’ MUST mean nothing but an unfulfilled assertion - Fail By Assertion, Not By Anything Else!

    Read the article

  • nokia cell phone not accepting IP from dnsmasq dhcp server

    - by samix
    Hello, I am having a problem connecting a Nokia cell phone to my home wifi network. The wifi network is provided by a wireless card in a machine running Debian Testing and a 2.6.26-2-686 kernel. The card is a D-Link DWL-G520 working in AP mode and has WPA encryption enabled. The wireless network is provided by hostapd using the madwifi driver. Windows and Mac machines work properly with this wifi network. When I try to get the Nokia phone to connect to the wifi network, I get these lines in my dnsmasq log (to see lines without wrapping, here is the pastebin link for convenience - http://pastebin.com/m466c8fd2): Oct 27 13:25:21 red hostapd: ath0: STA 11:22:33:44:55:66 IEEE 802.11: disassociated Oct 27 13:25:21 red hostapd: ath0: STA 11:22:33:44:55:66 IEEE 802.11: associated Oct 27 13:25:21 red hostapd: ath0: STA 11:22:33:44:55:66 RADIUS: starting accounting session 4AE664FA-00000036 Oct 27 13:25:21 red hostapd: ath0: STA 11:22:33:44:55:66 WPA: pairwise key handshake completed (WPA) Oct 27 13:25:21 red hostapd: ath0: STA 11:22:33:44:55:66 WPA: group key handshake completed (WPA) Oct 27 13:25:21 red dnsmasq-dhcp[11451]: 3875439214 Available DHCP range: 192.168.5.150 -- 192.168.5.199 Oct 27 13:25:21 red dnsmasq-dhcp[11451]: 3875439214 DHCPDISCOVER(ath0) 0.0.0.0 11:22:33:44:55:66 Oct 27 13:25:21 red dnsmasq-dhcp[11451]: 3875439214 DHCPOFFER(ath0) 192.168.5.21 11:22:33:44:55:66 Oct 27 13:25:21 red dnsmasq-dhcp[11451]: 3875439214 requested options: 12:hostname, 6:dns-server, 15:domain-name, Oct 27 13:25:21 red dnsmasq-dhcp[11451]: 3875439214 requested options: 1:netmask, 3:router, 28:broadcast, 120:sip-server Oct 27 13:25:21 red dnsmasq-dhcp[11451]: 3875439214 tags: known, ath0 Oct 27 13:25:21 red dnsmasq-dhcp[11451]: 3875439214 next server: 192.168.5.1 Oct 27 13:25:21 red dnsmasq-dhcp[11451]: 3875439214 sent size: 1 option: 53:message-type 02 Oct 27 13:25:21 red dnsmasq-dhcp[11451]: 3875439214 sent size: 4 option: 54:server-identifier 192.168.5.1 Oct 27 13:25:21 red dnsmasq-dhcp[11451]: 3875439214 sent size: 4 option: 51:lease-time 00:00:46:50 Oct 27 13:25:21 red dnsmasq-dhcp[11451]: 3875439214 sent size: 4 option: 58:T1 00:00:23:28 Oct 27 13:25:21 red dnsmasq-dhcp[11451]: 3875439214 sent size: 4 option: 59:T2 00:00:3d:86 Oct 27 13:25:21 red dnsmasq-dhcp[11451]: 3875439214 sent size: 4 option: 1:netmask 255.255.255.0 Oct 27 13:25:21 red dnsmasq-dhcp[11451]: 3875439214 sent size: 4 option: 28:broadcast 192.168.5.255 Oct 27 13:25:21 red dnsmasq-dhcp[11451]: 3875439214 sent size: 4 option: 3:router 192.168.5.1 Oct 27 13:25:21 red dnsmasq-dhcp[11451]: 3875439214 sent size: 4 option: 6:dns-server 192.168.5.1 Oct 27 13:25:21 red dnsmasq-dhcp[11451]: 3875439214 sent size: 8 option: 15:domain-name home.pvt Oct 27 13:25:21 red dnsmasq-dhcp[11451]: 3875439214 sent size: 3 option: 12:hostname NokiaCellPhone Anybody know what the problem might be? If I switch off dnsmasq dhcp query logging, i.e. if I decrease the verbosity of the log, all I see are two lines of DHCPDISCOVER(ath0) and DHCPOFFER(ath0) repeatedly in the log with no acceptance by the cell phone. It appears as though the phone is not accepting the dhcp offer. However, if I give the phone a static IP address in its configuration, it works properly on the wifi network. So it appears as though the problem is dhcp related. Hints? Suggestions? 
Installed stuff: $ dpkg -l dnsmasq hostap* | grep ^i ii dnsmasq 2.50-1 A small caching DNS proxy and DHCP/TFTP server ii dnsmasq-base 2.50-1 A small caching DNS proxy and DHCP/TFTP server ii hostapd 1:0.6.9-3 user space IEEE 802.11 AP and IEEE 802.1X/WPA/ Thanks. PS: Here is the DHCP tcp dump for more information (with mac addresses changed): $ sudo dhcpdump -i ath0 -h ^11:22:33:44:55:66 TIME: 2009-10-30 12:15:32.916 IP: 0.0.0.0 (1:22:33:44:55:66) 255.255.255.255 (ff:ff:ff:ff:ff:ff) OP: 1 (BOOTPREQUEST) HTYPE: 1 (Ethernet) HLEN: 6 HOPS: 0 XID: c3f93d53 SECS: 0 FLAGS: 7f80 CIADDR: 0.0.0.0 YIADDR: 0.0.0.0 SIADDR: 0.0.0.0 GIADDR: 0.0.0.0 CHADDR: 11:22:33:44:55:66:00:00:00:00:00:00:00:00:00:00 SNAME: . FNAME: . OPTION: 53 ( 1) DHCP message type 1 (DHCPDISCOVER) OPTION: 50 ( 4) Request IP address 0.0.0.0 OPTION: 61 ( 7) Client-identifier 01:11:22:33:44:55:66 OPTION: 55 ( 7) Parameter Request List 12 (Host name) 6 (DNS server) 15 (Domainname) 1 (Subnet mask) 3 (Routers) 28 (Broadcast address) 120 (SIP Servers DHCP Option) OPTION: 57 ( 2) Maximum DHCP message size 576 TIME: 2009-10-30 12:15:32.918 IP: 0.0.0.0 (1:22:33:44:55:66) 255.255.255.255 (ff:ff:ff:ff:ff:ff) OP: 1 (BOOTPREQUEST) HTYPE: 1 (Ethernet) HLEN: 6 HOPS: 0 XID: c3f93d53 SECS: 0 FLAGS: 7f80 CIADDR: 0.0.0.0 YIADDR: 0.0.0.0 SIADDR: 0.0.0.0 GIADDR: 0.0.0.0 CHADDR: 11:22:33:44:55:66:00:00:00:00:00:00:00:00:00:00 SNAME: . FNAME: . OPTION: 53 ( 1) DHCP message type 1 (DHCPDISCOVER) OPTION: 50 ( 4) Request IP address 0.0.0.0 OPTION: 61 ( 7) Client-identifier 01:11:22:33:44:55:66 OPTION: 55 ( 7) Parameter Request List 12 (Host name) 6 (DNS server) 15 (Domainname) 1 (Subnet mask) 3 (Routers) 28 (Broadcast address) 120 (SIP Servers DHCP Option) OPTION: 57 ( 2) Maximum DHCP message size 576 TIME: 2009-10-30 12:15:32.918 IP: 192.168.5.1 (a:bb:cc:dd:ee:ff) 255.255.255.255 (ff:ff:ff:ff:ff:ff) OP: 2 (BOOTPREPLY) HTYPE: 1 (Ethernet) HLEN: 6 HOPS: 0 XID: c3f93d53 SECS: 0 FLAGS: 7f80 CIADDR: 0.0.0.0 YIADDR: 192.168.5.21 SIADDR: 192.168.5.1 GIADDR: 0.0.0.0 CHADDR: 11:22:33:44:55:66:00:00:00:00:00:00:00:00:00:00 SNAME: . FNAME: . OPTION: 53 ( 1) DHCP message type 2 (DHCPOFFER) OPTION: 54 ( 4) Server identifier 192.168.5.1 OPTION: 51 ( 4) IP address leasetime 18000 (5h) OPTION: 58 ( 4) T1 9000 (2h30m) OPTION: 59 ( 4) T2 15750 (4h22m30s) OPTION: 1 ( 4) Subnet mask 255.255.255.0 OPTION: 28 ( 4) Broadcast address 192.168.5.255 OPTION: 3 ( 4) Routers 192.168.5.1 OPTION: 6 ( 4) DNS server 192.168.5.1 OPTION: 15 ( 8) Domainname home.pvt OPTION: 12 ( 3) Host name Nokia_E63 TIME: 2009-10-30 12:15:34.922 IP: 0.0.0.0 (1:22:33:44:55:66) 255.255.255.255 (ff:ff:ff:ff:ff:ff) OP: 1 (BOOTPREQUEST) HTYPE: 1 (Ethernet) HLEN: 6 HOPS: 0 XID: c3f93d53 SECS: 2 FLAGS: 7f80 CIADDR: 0.0.0.0 YIADDR: 0.0.0.0 SIADDR: 0.0.0.0 GIADDR: 0.0.0.0 CHADDR: 11:22:33:44:55:66:00:00:00:00:00:00:00:00:00:00 SNAME: . FNAME: . OPTION: 53 ( 1) DHCP message type 1 (DHCPDISCOVER) OPTION: 50 ( 4) Request IP address 0.0.0.0 OPTION: 61 ( 7) Client-identifier 01:11:22:33:44:55:66 OPTION: 55 ( 7) Parameter Request List 12 (Host name) 6 (DNS server) 15 (Domainname) 1 (Subnet mask) 3 (Routers) 28 (Broadcast address) 120 (SIP Servers DHCP Option) OPTION: 57 ( 2) Maximum DHCP message size 576 TIME: 2009-10-30 12:15:34.922 IP: 0.0.0.0 (1:22:33:44:55:66) 255.255.255.255 (ff:ff:ff:ff:ff:ff) OP: 1 (BOOTPREQUEST) HTYPE: 1 (Ethernet) HLEN: 6 HOPS: 0 XID: c3f93d53 SECS: 2 FLAGS: 7f80 CIADDR: 0.0.0.0 YIADDR: 0.0.0.0 SIADDR: 0.0.0.0 GIADDR: 0.0.0.0 CHADDR: 11:22:33:44:55:66:00:00:00:00:00:00:00:00:00:00 SNAME: . 
FNAME: . OPTION: 53 ( 1) DHCP message type 1 (DHCPDISCOVER) OPTION: 50 ( 4) Request IP address 0.0.0.0 OPTION: 61 ( 7) Client-identifier 01:11:22:33:44:55:66 OPTION: 55 ( 7) Parameter Request List 12 (Host name) 6 (DNS server) 15 (Domainname) 1 (Subnet mask) 3 (Routers) 28 (Broadcast address) 120 (SIP Servers DHCP Option) OPTION: 57 ( 2) Maximum DHCP message size 576 TIME: 2009-10-30 12:15:34.923 IP: 192.168.5.1 (a:bb:cc:dd:ee:ff) 255.255.255.255 (ff:ff:ff:ff:ff:ff) OP: 2 (BOOTPREPLY) HTYPE: 1 (Ethernet) HLEN: 6 HOPS: 0 XID: c3f93d53 SECS: 2 FLAGS: 7f80 CIADDR: 0.0.0.0 YIADDR: 192.168.5.21 SIADDR: 192.168.5.1 GIADDR: 0.0.0.0 CHADDR: 11:22:33:44:55:66:00:00:00:00:00:00:00:00:00:00 SNAME: . FNAME: . OPTION: 53 ( 1) DHCP message type 2 (DHCPOFFER) OPTION: 54 ( 4) Server identifier 192.168.5.1 OPTION: 51 ( 4) IP address leasetime 18000 (5h) OPTION: 58 ( 4) T1 9000 (2h30m) OPTION: 59 ( 4) T2 15750 (4h22m30s) OPTION: 1 ( 4) Subnet mask 255.255.255.0 OPTION: 28 ( 4) Broadcast address 192.168.5.255 OPTION: 3 ( 4) Routers 192.168.5.1 OPTION: 6 ( 4) DNS server 192.168.5.1 OPTION: 15 ( 8) Domainname home.pvt OPTION: 12 ( 3) Host name Nokia_E63 TIME: 2009-10-30 12:15:38.919 IP: 0.0.0.0 (1:22:33:44:55:66) 255.255.255.255 (ff:ff:ff:ff:ff:ff) OP: 1 (BOOTPREQUEST) HTYPE: 1 (Ethernet) HLEN: 6 HOPS: 0 XID: c3f93d53 SECS: 6 FLAGS: 7f80 CIADDR: 0.0.0.0 YIADDR: 0.0.0.0 SIADDR: 0.0.0.0 GIADDR: 0.0.0.0 CHADDR: 11:22:33:44:55:66:00:00:00:00:00:00:00:00:00:00 SNAME: . FNAME: . OPTION: 53 ( 1) DHCP message type 1 (DHCPDISCOVER) OPTION: 50 ( 4) Request IP address 0.0.0.0 OPTION: 61 ( 7) Client-identifier 01:11:22:33:44:55:66 OPTION: 55 ( 7) Parameter Request List 12 (Host name) 6 (DNS server) 15 (Domainname) 1 (Subnet mask) 3 (Routers) 28 (Broadcast address) 120 (SIP Servers DHCP Option) OPTION: 57 ( 2) Maximum DHCP message size 576 TIME: 2009-10-30 12:15:38.920 IP: 0.0.0.0 (1:22:33:44:55:66) 255.255.255.255 (ff:ff:ff:ff:ff:ff) OP: 1 (BOOTPREQUEST) HTYPE: 1 (Ethernet) HLEN: 6 HOPS: 0 XID: c3f93d53 SECS: 6 FLAGS: 7f80 CIADDR: 0.0.0.0 YIADDR: 0.0.0.0 SIADDR: 0.0.0.0 GIADDR: 0.0.0.0 CHADDR: 11:22:33:44:55:66:00:00:00:00:00:00:00:00:00:00 SNAME: . FNAME: . OPTION: 53 ( 1) DHCP message type 1 (DHCPDISCOVER) OPTION: 50 ( 4) Request IP address 0.0.0.0 OPTION: 61 ( 7) Client-identifier 01:11:22:33:44:55:66 OPTION: 55 ( 7) Parameter Request List 12 (Host name) 6 (DNS server) 15 (Domainname) 1 (Subnet mask) 3 (Routers) 28 (Broadcast address) 120 (SIP Servers DHCP Option) OPTION: 57 ( 2) Maximum DHCP message size 576 TIME: 2009-10-30 12:15:38.921 IP: 192.168.5.1 (a:bb:cc:dd:ee:ff) 255.255.255.255 (ff:ff:ff:ff:ff:ff) OP: 2 (BOOTPREPLY) HTYPE: 1 (Ethernet) HLEN: 6 HOPS: 0 XID: c3f93d53 SECS: 6 FLAGS: 7f80 CIADDR: 0.0.0.0 YIADDR: 192.168.5.21 SIADDR: 192.168.5.1 GIADDR: 0.0.0.0 CHADDR: 11:22:33:44:55:66:00:00:00:00:00:00:00:00:00:00 SNAME: . FNAME: . 
OPTION: 53 ( 1) DHCP message type 2 (DHCPOFFER) OPTION: 54 ( 4) Server identifier 192.168.5.1 OPTION: 51 ( 4) IP address leasetime 18000 (5h) OPTION: 58 ( 4) T1 9000 (2h30m) OPTION: 59 ( 4) T2 15750 (4h22m30s) OPTION: 1 ( 4) Subnet mask 255.255.255.0 OPTION: 28 ( 4) Broadcast address 192.168.5.255 OPTION: 3 ( 4) Routers 192.168.5.1 OPTION: 6 ( 4) DNS server 192.168.5.1 OPTION: 15 ( 8) Domainname home.pvt OPTION: 12 ( 3) Host name Nokia_E63 TIME: 2009-10-30 12:15:46.944 IP: 0.0.0.0 (1:22:33:44:55:66) 255.255.255.255 (ff:ff:ff:ff:ff:ff) OP: 1 (BOOTPREQUEST) HTYPE: 1 (Ethernet) HLEN: 6 HOPS: 0 XID: ccafe769 SECS: 14 FLAGS: 7f80 CIADDR: 0.0.0.0 YIADDR: 0.0.0.0 SIADDR: 0.0.0.0 GIADDR: 0.0.0.0 CHADDR: 11:22:33:44:55:66:00:00:00:00:00:00:00:00:00:00 SNAME: . FNAME: . OPTION: 53 ( 1) DHCP message type 1 (DHCPDISCOVER) OPTION: 50 ( 4) Request IP address 0.0.0.0 OPTION: 61 ( 7) Client-identifier 01:11:22:33:44:55:66 OPTION: 55 ( 7) Parameter Request List 12 (Host name) 6 (DNS server) 15 (Domainname) 1 (Subnet mask) 3 (Routers) 28 (Broadcast address) 120 (SIP Servers DHCP Option) OPTION: 57 ( 2) Maximum DHCP message size 576 TIME: 2009-10-30 12:15:46.944 IP: 0.0.0.0 (1:22:33:44:55:66) 255.255.255.255 (ff:ff:ff:ff:ff:ff) OP: 1 (BOOTPREQUEST) HTYPE: 1 (Ethernet) HLEN: 6 HOPS: 0 XID: ccafe769 SECS: 14 FLAGS: 7f80 CIADDR: 0.0.0.0 YIADDR: 0.0.0.0 SIADDR: 0.0.0.0 GIADDR: 0.0.0.0 CHADDR: 11:22:33:44:55:66:00:00:00:00:00:00:00:00:00:00 SNAME: . FNAME: . OPTION: 53 ( 1) DHCP message type 1 (DHCPDISCOVER) OPTION: 50 ( 4) Request IP address 0.0.0.0 OPTION: 61 ( 7) Client-identifier 01:11:22:33:44:55:66 OPTION: 55 ( 7) Parameter Request List 12 (Host name) 6 (DNS server) 15 (Domainname) 1 (Subnet mask) 3 (Routers) 28 (Broadcast address) 120 (SIP Servers DHCP Option) OPTION: 57 ( 2) Maximum DHCP message size 576 TIME: 2009-10-30 12:15:46.945 IP: 192.168.5.1 (a:bb:cc:dd:ee:ff) 255.255.255.255 (ff:ff:ff:ff:ff:ff) OP: 2 (BOOTPREPLY) HTYPE: 1 (Ethernet) HLEN: 6 HOPS: 0 XID: ccafe769 SECS: 14 FLAGS: 7f80 CIADDR: 0.0.0.0 YIADDR: 192.168.5.21 SIADDR: 192.168.5.1 GIADDR: 0.0.0.0 CHADDR: 11:22:33:44:55:66:00:00:00:00:00:00:00:00:00:00 SNAME: . FNAME: . OPTION: 53 ( 1) DHCP message type 2 (DHCPOFFER) OPTION: 54 ( 4) Server identifier 192.168.5.1 OPTION: 51 ( 4) IP address leasetime 18000 (5h) OPTION: 58 ( 4) T1 9000 (2h30m) OPTION: 59 ( 4) T2 15750 (4h22m30s) OPTION: 1 ( 4) Subnet mask 255.255.255.0 OPTION: 28 ( 4) Broadcast address 192.168.5.255 OPTION: 3 ( 4) Routers 192.168.5.1 OPTION: 6 ( 4) DNS server 192.168.5.1 OPTION: 15 ( 8) Domainname home.pvt OPTION: 12 ( 3) Host name Nokia_E63 TIME: 2009-10-30 12:15:48.952 IP: 0.0.0.0 (1:22:33:44:55:66) 255.255.255.255 (ff:ff:ff:ff:ff:ff) OP: 1 (BOOTPREQUEST) HTYPE: 1 (Ethernet) HLEN: 6 ... and so on ...

    Read the article

  • What's the difference between Scala and Red Hat's Ceylon language?

    - by John Bryant
Red Hat's Ceylon language has some interesting improvements over Java: the overall vision (learn from Java's mistakes, keep the good, ditch the bad); a focus on readability and ease of learning/use; static typing (find errors at compile time, not run time); no “special” types, everything is an object; named and optional parameters (C# 4.0); nullable types (C# 2.0); no need for explicit getters/setters until you are ready for them (C# 3.0); type inference via the "local" keyword (C# 3.0 "var"); sequences (arrays) and their accompanying syntactic sugar (C# 3.0); and a straightforward implementation of higher-order functions. I don't know Scala, but I have heard it offers some similar advantages over Java. How would Scala compare to Ceylon in this respect?

    Read the article

  • Want to Patch your Red Hat Linux Kernel Without Rebooting?

    - by Lenz Grimmer
[Image: "Patched Tube" by Morten Liebach, CC BY 2.0] Are you running Red Hat Enterprise Linux? Take back your weekend and say goodbye to lengthy maintenance windows for kernel updates! With Ksplice, you can install kernel updates while the system is running. Stay secure and compliant without the hassle. To give you a taste of one of the many features included in Oracle Linux Premier Support, we now offer a free 30-day Ksplice trial for RHEL systems. Give it a try and bring your Linux kernel up to date without rebooting (not even once to install it)! For more information on this exciting technology, read Wim's OTN article on using Oracle Ksplice to update Oracle Linux systems without rebooting. You can also watch Waseem Daher (one of the Ksplice founders) tell you more about Ksplice zero-downtime updates in the screencast "Zero Downtime OS Updates with Ksplice". - Lenz

    Read the article

  • Is a company order to switch to a certain IDE a red flag?

    - by Justin Alexander
I recently joined a rapidly growing startup. In the past 3 months the development team has grown from 4 to 12. Until now they have been very laissez-faire about what developers used to do their work. In fact, one of the things I initially found attractive about the company is that most programmers used Linux, or whatever OS they felt best suited their efforts. Now orders have come down, without discussion, that everyone is to switch to Eclipse. A fine editor; I prefer Sublime Text 2, but that's just my personal taste. Is this a red flag? It seems capricious and unreasonably controlling to tell developers (non-MS) what IDE or toolsets to use if they are already settled in and productive.

    Read the article

  • What does this red icon on my panel mean?

    - by Ceil
This red icon showed up after I upgraded to Ubuntu 12.04 LTS; I can't figure out how to ditch it. It seems to be a notification thing. I am at a loss. The icon is here: I clicked it and read this: An error occurred, please run Package Manager from the right-click menu or apt-get in a terminal to see what is wrong. The error message was: 'Error: BrokenCount > 0'. This usually means that your installed packages have unmet dependencies.

    Read the article

  • Interview with Geoff Bones, developer on SQL Storage Compress

    - by red(at)work
How did you come to be working at Red Gate? I've been working at Red Gate for nine months; before that I had been at a multinational engineering company. A number of my colleagues had left to work at Red Gate and spoke very highly of it, but I was happy in my role and thought, 'It can't be that great there, surely? They'll be back!' Then one day I visited to catch up with them over lunch in the Red Gate canteen. I was so impressed with what I found there that, three days later, I applied for a role as a developer. And how did you get into software development? My first job out of university was working as a systems programmer on IBM mainframes. This was quite a while ago: there was a lot of assembler and loading programs from tape drives and that kind of stuff. I learned a lot about how computers work, and this stood me in good stead when I moved over to development in the 90s. What's the best thing about working as a developer at Red Gate? Where should I start? One of the great things as a developer at Red Gate is the useful feedback and close contact we have with the people who use our products, either directly at trade shows and other events or through information coming through the product managers. The company's whole ethos is built around assisting the user, and this is in stark contrast to my previous development roles. We aim to produce tools that people really want to use, that they enjoy using, and, as a developer, this is a great thing to aim for and a great feeling when we get it right. At Red Gate we also try to cut out the things that distract and stop us doing our jobs. As a developer, this means that I can focus on the code and the product I'm working on, knowing that others are doing a first-class job of making sure that the builds are running smoothly and that I'm getting great feedback from the testers. We keep our process light and effective, as we want to produce great software more than we want to produce great audit trails. Tell us a bit about the products you are currently working on. You mean HyperBac? First let me explain a bit about what HyperBac is. At heart it's a compression and encryption technology, but with a few added features that open up a wealth of really exciting possibilities. Right now we have the HyperBac technology in just three products: SQL HyperBac, SQL Virtual Restore and SQL Storage Compress, but we're only starting to develop what it can do. My personal favourite is SQL Virtual Restore; for example, I love the way you can use it to run independent test databases that are all backed by a single compressed backup. I don't think the market yet realises the kind of things you can do once you are using these products. On the other hand, the benefits of SQL Storage Compress are straightforward: run your databases but use only 20% of the disk space. Databases are getting larger and larger, and, as they do, so does your ROI. What's a typical day for you? My days are pretty varied. We have our daily team stand-up meeting and then sometimes I will work alone on a current issue, or I'll be pair programming with one of my colleagues. From time to time we give half a day over to future planning with the team, when we look at the long- and short-term aims for the product and work out the development priorities. I also get to go to conferences and events, which is unusual for a development role and gives me the chance to meet and talk to our customers directly. 
Have you noticed anything different about developing tools for DBAs rather than for other kinds of IT user? It seems to me that DBAs are quite independent-minded; they know exactly what problem they are facing, and often have a solution in mind before they begin to look at what's on the market. This means that they're likely to cherry-pick tools from a range of vendors, picking the ones that are the best fit for them and that disrupt their environments the least. When I've met with DBAs, I've often been very impressed by their ability to summarise their setup, the issues, the obstacles they face when implementing a tool, and their plans for their environment. It's easier to develop products for this audience as they give such a detailed overview of their needs, and I feel I understand their problems.

    Read the article

  • Magento install errors

    - by nXqd
I'm trying to install the new Magento version 1.4.xx, but after the configuration menu, once I have copied local.example.xml to local.xml and changed it, I run into the following error: Error in file: "E:\Soft\Programming\xampp\htdocs\magento\app\code\core\Mage\Catalog\sql\catalog_setup\mysql4-install-1.4.0.0.0.php" - SQLSTATE[23000]: Integrity constraint violation: 1062 Duplicate entry '4-page_layout' for key 'entity_type_id' Trace: #0 E:\Soft\Programming\xampp\htdocs\magento\app\code\core\Mage\Core\Model\Resource\Setup.php(374): Mage::exception('Mage_Core', 'Error in file: ...') #1 E:\Soft\Programming\xampp\htdocs\magento\app\code\core\Mage\Core\Model\Resource\Setup.php(260): Mage_Core_Model_Resource_Setup->_modifyResourceDb('install', '', '1.4.0.0.21') #2 E:\Soft\Programming\xampp\htdocs\magento\app\code\core\Mage\Core\Model\Resource\Setup.php(224): Mage_Core_Model_Resource_Setup->_installResourceDb('1.4.0.0.21') #3 E:\Soft\Programming\xampp\htdocs\magento\app\code\core\Mage\Core\Model\Resource\Setup.php(153): Mage_Core_Model_Resource_Setup->applyUpdates() #4 E:\Soft\Programming\xampp\htdocs\magento\app\code\core\Mage\Core\Model\App.php(363): Mage_Core_Model_Resource_Setup::applyAllUpdates() #5 E:\Soft\Programming\xampp\htdocs\magento\app\code\core\Mage\Core\Model\App.php(295): Mage_Core_Model_App->_initModules() #6 E:\Soft\Programming\xampp\htdocs\magento\app\Mage.php(596): Mage_Core_Model_App->run(Array) #7 E:\Soft\Programming\xampp\htdocs\magento\index.php(78): Mage::run('', 'store') #8 {main} I really need your help, thanks so much :)

    Read the article

  • Generate colors between red and green for a power meter?

    - by Simucal
I'm writing a Java game and I want to implement a power meter for how hard you are going to shoot something. I need to write a function that takes an int between 0 and 100 and, based on how high that number is, returns a color between Green (0 on the power scale) and Red (100 on the power scale), similar to how volume controls work. What operation do I need to do on the Red, Green, and Blue components of a color to generate the colors between Green and Red? So I could run, say, getColor(80) and it will return an orangish color (its values in R, G, B), or getColor(10), which will return a more Green/Yellow RGB value. I know I need to adjust the R, G, B components to get a new color, but I don't know specifically what goes up or down as the colors shift from Green to Red. Progress: I ended up using the HSV/HSB color space because I liked the gradient better (no dark browns in the middle). The function I used was (in Java):

        public Color getColor(double power) {
            double H = power * 0.4; // Hue (note 0.4 = Green, see huge chart below)
            double S = 0.9;         // Saturation
            double B = 0.9;         // Brightness
            return Color.getHSBColor((float) H, (float) S, (float) B);
        }

    Where "power" is a number between 0.0 and 1.0: 0.0 will return a bright red, 1.0 will return a bright green. Java Hue Chart: Thanks everyone for helping me with this!
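    For the plain RGB approach the question originally asks about, one common scheme is to interpolate linearly from green to red through yellow: ramp the red channel up over the first half of the scale while green stays full, then ramp the green channel down over the second half while red stays full. Below is a minimal sketch of that idea, assuming java.awt.Color and the question's convention of 0 = green and 100 = red; the class and method names are hypothetical.

        import java.awt.Color;

        public class PowerMeterColors {
            // Linear RGB interpolation from green (power = 0) to red (power = 100),
            // passing through yellow at the midpoint.
            public static Color getRgbColor(int power) {
                double p = Math.max(0, Math.min(100, power)) / 100.0; // clamp to 0..1

                int red, green;
                if (p < 0.5) {
                    red = (int) Math.round(255 * p * 2); // green -> yellow: red ramps up
                    green = 255;
                } else {
                    red = 255;
                    green = (int) Math.round(255 * (1.0 - p) * 2); // yellow -> red: green ramps down
                }
                return new Color(red, green, 0);
            }

            public static void main(String[] args) {
                System.out.println(getRgbColor(10)); // mostly green
                System.out.println(getRgbColor(50)); // yellow
                System.out.println(getRgbColor(80)); // orangish
            }
        }

    This does by hand what the HSB version above gets for free: sweeping the hue from green through yellow and orange to red, which is why Color.getHSBColor tends to give a smoother gradient with less manual bookkeeping.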

    Read the article

< Previous Page | 9 10 11 12 13 14 15 16 17 18 19 20  | Next Page >