Search Results

  • Task-It Webinar - Building a real-world application with RadControls for Silverlight 4

    Yesterday I held a live webinar on Building a real-world application with RadControls for Silverlight 4. Thank you to everyone who attended; if you did not have a chance to catch it, you can watch a recorded version here: Building a real-world application with RadControls for Silverlight 4. I wasn't able to get too deep into the inner workings of the app because of time limitations, but over the upcoming weeks I will dig deeper in my blog posts, and potentially some videos. Did you know that DotNetSlackers also publishes .NET articles written by well-known .NET authors? We already have over 80 articles in several categories, including Silverlight. Take a look here.

    Read the article

  • ADF Real World Developers Guide Book Review

    - by Grant Ronald
    I'm halfway through my review of "Oracle ADF Real World Developer's Guide" by Jobinesh Purushothaman - unfortunately some work deadlines derailed me from having completed my review by now, but here goes. First thing, Jobinesh works in the Oracle Product Management team with me, so he is a colleague. That declaration aside, it's clear that this is someone who has done the "real world" side of ADF development, and that comes out in the book. In this book he addresses both newbies and experienced developers alike. He introduces the ADF building blocks like entity objects and view objects, but also goes into some of the nitty-gritty details as well. There is a pro and a con to this approach: having only just learned about an entity or view object, you might then be blown away by some of the lower-level details of coding or lifecycle. In that respect, you might consider this a book to read 3 or 4 times, maybe skipping some elements on the first read; on the next read you have a better grounding to learn the more advanced topics. One of the key issues he addresses is breaking down what happens behind the scenes. At first, this may not seem important since you trust the framework to do everything for you - but having an understanding of what goes on is essential as you move through development. For example, on page 58 he explains the full lifecycle of what happens when you execute a query. I think this is a great feature of his book. You see this elsewhere; for example, he explains the full lifecycle of what goes on when a page is accessed: which files are involved, the JSF lifecycle, etc. He also sprinkles the book with best practices and advice which go beyond the standard features of ADF and really hit the mark in terms of "real world" advice. So in summary, this is a great ADF book, well written and covering a mass of information. If you are brand new to ADF it's still valid, given it does start with the basics; but you might want to read the book 2 or 3 times, skipping the advanced stuff on the first read. For those who have some basics already, it's going to be an awesome way to cement your knowledge and take it to the next level. And for the ADF experts, you are still going to pick up some great ADF nuggets. Advice: every ADF developer should have one!

    Read the article

  • Calling a WPF application and modifying exposed properties?

    - by Justin
    I have a WPF keyboard application, developed in such a way that another application can call it and modify its properties to adapt the keyboard to its needs. Right now I have a file, *.Keys.Set, which tells the application (on open) to style itself according to that new style. I know this file could be passed as a command-line argument into the application; that would not be a problem. My concern is: is there a way, via a managed environment, to change the properties of the executable as long as they are exposed properly? An example:

      'Creates a new instance of the Keyboard Application
      Dim e_key As New WpfApplication("C:\egt\components\keyboard.exe")

      'Sets the style path
      e_key.SetStylePath("c:\users\joe\apps\me\default.keys.set")
      e_key.Refresh()        'Applies the style
      e_key.HideMenu()       'Hides the menu
      e_key.ShowDeck("PIN")  'Shows the custom "deck" of keyboard keys the developer
                             'created in the style application.

      ''work with events and responses

      'Clear the instance from memory
      e_key.Close()
      e_key.Dispose()
      e_key = Nothing

    This would allow my application to become easily accessible to other touch-screen application developers, allowing them to use my keyboard and keep the functionality they need. It seems like it might be possible, because (name of executable).application shows all the exposed functions, properties, and values. I just have never done this before. Any help would be appreciated; thank you in advance.
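    For what it's worth, a WPF .exe is an ordinary .NET assembly, so one way to sketch this is to load it and drive an exposed type through reflection. This is only a hedged illustration - the type name "Keyboard.KeyboardHost" and its members are assumptions standing in for whatever the keyboard exe actually exposes, and a real WPF Application object would additionally need its own STA thread and dispatcher:

      using System;
      using System.Reflection;

      class KeyboardDriver
      {
          static void Main()
          {
              // A WPF executable is a regular .NET assembly and can be loaded directly.
              Assembly exe = Assembly.LoadFrom(@"C:\egt\components\keyboard.exe");

              // Hypothetical public type the keyboard exe would need to expose.
              Type hostType = exe.GetType("Keyboard.KeyboardHost", throwOnError: true);
              object host = Activator.CreateInstance(hostType);

              // Invoke the exposed members by name; all member names are assumptions.
              hostType.GetMethod("SetStylePath")
                      .Invoke(host, new object[] { @"c:\users\joe\apps\me\default.keys.set" });
              hostType.GetMethod("ShowDeck").Invoke(host, new object[] { "PIN" });
          }
      }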

    Read the article

  • What is the Oracle Utilities Application Framework?

    - by Anthony Shorten
    The Oracle Utilities Application Framework is a reusable, scalable and flexible Java-based framework which allows other products to be built, configured and implemented in a standard way. Note: even though the Framework is built in Java, it can be integrated with COBOL-based extensions for backward compatibility. When Oracle Utilities Customer Care & Billing was migrated from V1 to V2, it was decided that the technical aspects of that product be separated out to allow for reuse and independence from technical issues. The idea was that all the technical aspects would be concentrated in this separate product (i.e. a framework), allowing all products using the framework to concentrate on delivering superior functionality. The product was named the Oracle Utilities Application Framework (oufw is the product code). The technical components contained in the Oracle Utilities Application Framework can be summarized as follows:

    - Metadata - the framework is responsible for defining and using metadata to define the runtime behavior of the product. All metadata definition and management is contained within the framework.
    - UI Management - the framework is responsible for defining and rendering the pages, and for ensuring the pages are in the appropriate format for the locale.
    - Integration - the framework is responsible for providing the integration points to the architecture. Refer to the Oracle Utilities Application Framework Integration Overview for more details.
    - Tools - the framework provides a common set of facilities and tools that can be used across all products.
    - Technology - the framework is responsible for all technology standards compliance, platform support and integration.

    There are a number of products from the Tax and Utilities Global Business Unit as well as from the Financial Services Global Business Unit that are built upon the Oracle Utilities Application Framework. These products require the Oracle Utilities Application Framework to be installed first, and then the product itself is installed onto the framework to complete the installation process. The Oracle Utilities Application Framework provides a number of key benefits to these products:

    - Common facilities - the framework provides a standard set of technical facilities, so products can concentrate on the unique aspects of their markets rather than making technical decisions.
    - Common methods of configuration - the framework standardizes the technical configuration process for a product. Customers can effectively reuse the configuration process across products.
    - Multi-lingual and multi-platform - the framework allows the products to be offered in more markets and across multiple platforms for maximized flexibility.
    - Common methods of implementation - the framework standardizes the technical aspects of a product implementation. Customers can effectively reuse the technical implementation process across products.
    - Quicker adoption of new technologies - as new technologies and standards are identified as being important for the product line, they can be integrated centrally, benefiting multiple products.
    - Cross-product reuse - as enhancements to the framework are identified by a particular product, all products can potentially benefit from the enhancement.

    Note: use of the Oracle Utilities Application Framework does not preclude the introduction of product-specific technologies or facilities to satisfy market needs. The framework minimizes the need for, and assists in the quick integration of, a new product-specific piece of technology (if necessary). The Framework is not available as a product in itself and is bundled with Tax and Utilities Global Business Unit products. At the present time the following products are on the Framework:

    - Oracle Utilities Customer Care And Billing (V2 and above)
    - Oracle Enterprise Taxation Management (V2 and above)
    - Oracle Utilities Business Intelligence (V2 and above)
    - Oracle Utilities Mobile Workforce Management (V2 and above)

    Read the article

  • Application Logging needs work

    Application Logging
    Application logging is the act of logging events that occur within an application, much like how a court reporter documents what happens in a court case. Application logs can be useful for several reasons, but the most common use for logs is to recreate the steps that led to an application error so its root cause can be found. Other uses include the detection of fraud, verification of user activity, and providing audits of user/data interactions. "Logs can contain different kinds of data. The selection of the data used is normally affected by the motivation leading to the logging." (OWASP, 2009) OWASP also states that logging should include applicable debugging information such as the event date and time, the responsible process, and a description of the event. "There are many reasons why a logging system is a necessary part of delivering a distributed application. One of the most important is the ability to track exactly how many users are using the application during different time periods." (Hatton, 2000) Hatton also states that application logging helps system designers determine whether parts of an application aren't being used as designed. He implies that low usage can be used to identify whether users like or dislike aspects of a system. This enables application designers to find out why users don't like aspects of an application so that changes can be made to increase its usefulness and effectiveness. "Logging memory usage can also assist you in tuning up the internals of your application. If you're experiencing a randomly occurring problem, being able to match activities performed with the memory status at the time may enable you to discover the cause of the problem. It also gives you a good indication of the health of the distributed server machine at the time any activity is performed." (Hatton, 2000)

    Commonly logged application events (defined by OWASP):

    - Access of data
    - Creation of data
    - Modification of data in any form
    - Administrative functions
    - Configuration changes
    - Debugging information (application events)
    - Authorization attempts
    - Data deletion
    - Network communication
    - Authentication events
    - Errors/exceptions

    Application Error Logging
    The functionality associated with application error logging is really the combination of proper error handling and application logging. If we look back at Figure 4 and Figure 5, these code examples allow developers to handle various types of errors that occur within the life cycle of an application's execution. Application logging can be applied within the Catch section of the Try/Catch statement, allowing errors to be logged when they occur. By placing the logging within the Catch section, specific error details can be accessed that help identify the source of the error, the path to the error, what caused the error, and the definition of the error that occurred. This can then be logged and reviewed at a later date in order to recreate the error based on the data found in the application log. By allowing applications to log errors, developers and IT staff can use the logs to recreate errors encountered by end users or other dependent systems.
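    To make the catch-section idea concrete, here is a minimal sketch in C# (the file names and the simple file-based logger are assumptions, standing in for whatever logging library is actually in use):

      using System;
      using System.IO;

      class OrderProcessor
      {
          public void Process(string orderFile)
          {
              try
              {
                  string contents = File.ReadAllText(orderFile);
                  // ... process the order ...
              }
              catch (IOException ex)
              {
                  // Log the details needed to recreate the failure later:
                  // timestamp, responsible method, and the full exception.
                  Log(DateTime.UtcNow, nameof(Process), ex);
                  throw; // rethrow so callers still see the failure
              }
          }

          static void Log(DateTime when, string where, Exception ex)
          {
              File.AppendAllText("application.log", $"{when:o} {where}: {ex}{Environment.NewLine}");
          }
      }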

    Read the article

  • IIS7.5 Domain Account Application Pool Identity for SQL Server Authentication

    - by Gareth Hill
    In Windows Server 2003/IIS 6 land, we typically create an app pool that runs as the identity of an AD account created with minimal privileges simply for that purpose. This same domain user would also be granted access to SQL Server, so that any ASP.NET application in that app pool would be able to connect to SQL Server with Integrated Security=SSPI. We are making a brave move to the world of Windows Server 2008 R2/IIS 7.5 and are looking to replicate this model, but I am struggling with how to make the application pool in IIS 7.5 run as the identity of an AD account. I know this sounds simple, and hopefully it is, but my attempts so far have been fruitless. Should the application pool identity be a 'Custom account' for a domain account? Does the domain account need to be added to any groups?
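    For what it's worth, 'Custom account' is indeed how a domain identity is configured on an IIS 7.5 application pool. The same setting can also be scripted; here is a hedged sketch using the Microsoft.Web.Administration assembly that ships with IIS 7+ (the pool name and account below are placeholders):

      using Microsoft.Web.Administration; // %windir%\System32\inetsrv\Microsoft.Web.Administration.dll

      class SetPoolIdentity
      {
          static void Main()
          {
              using (ServerManager server = new ServerManager())
              {
                  // "MyAppPool" and DOMAIN\svc-web are placeholder values.
                  ApplicationPool pool = server.ApplicationPools["MyAppPool"];
                  pool.ProcessModel.IdentityType = ProcessModelIdentityType.SpecificUser;
                  pool.ProcessModel.UserName = @"DOMAIN\svc-web";
                  pool.ProcessModel.Password = "********";
                  server.CommitChanges();
              }
          }
      }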

    Read the article

  • application monitoring tools

    - by Shachar
    We're an ISV about to deploy our SaaS application over the internet to our end users, and we are currently looking for an application monitoring solution. In addition to monitoring the usual OS-level suspects (I/O, disk space, logs, CPU, RAM, swapping, etc.), we're also looking to monitor, alert and report on internal application events, conditions, and counters (think queue size for an internal service, or the latency of a service we're getting from a third party via custom APIs). We started looking at Nagios, Zenoss, etc., but found out those only do the low-level stuff, and we are currently looking at MOM and ManageEngine. Still, they are far from being a custom app monitoring tool. So - do you have anything to suggest?

    Read the article

  • 500 Error when using custom account for application pool in IIS 7

    - by Brownie
    I have a very simple site with only static files in IIS 7 on Windows Server 2008 SP2. When I try to access any static file I get a 500 error. If I rename an HTML file to have an .aspx extension, it works fine. The site also works fine when using the built-in identity for the application pool. The problem occurs when I switch to using a custom account for the application pool. I have tried using both local and domain accounts to run the application pool under. I have given full control to these accounts on the website directory and files. Turning on tracing reveals this error message:

      ModuleName: IIS Web Core
      Notification: 2
      HttpStatus: 500
      HttpReason: Internal Server Error
      HttpSubStatus: 0
      ErrorCode: 2147943746
      ConfigExceptionInfo
      Notification: AUTHENTICATE_REQUEST
      ErrorCode: Either a required impersonation level was not provided, or the provided impersonation level is invalid. (0x80070542)

    I have not had any luck googling the error code.

    Read the article

  • A way of doing real-world test-driven development (and some thoughts about it)

    - by Thomas Weller
    Lately, I exchanged some arguments with Derick Bailey about some details of the red-green-refactor cycle of the test-driven development process. In short, the issue revolved around the fact that it's not enough to have a test red or green - it's also important to have it red or green for the right reasons. While for me it's sufficient to initially have a NotImplementedException in place, Derick argues that this is not totally correct (see these two posts: Red/Green/Refactor, For The Right Reasons and Red For The Right Reason: Fail By Assertion, Not By Anything Else). And he's right. But on the other hand, I had no idea how his insights could have any practical consequence for my own individual interpretation of the red-green-refactor cycle (which is not really red-green-refactor, at least not in its pure sense; see the rest of this article). This made me think deeply for several days. In the end I found out that the 'right reason' changes in my understanding depending on what development phase I'm in.

    To make this clear (at least I hope it becomes clear...) I started to describe my way of working in some detail, and then something strange happened: the scope of the article slightly shifted from focusing 'only' on the 'right reason' issue to something more general, which you might describe as 'Doing real-world TDD in .NET, with massive use of third-party add-ins'. This is because I feel that there is a more general statement about test-driven development to make: it's high time to speak about the 'How' of TDD, not always only the 'Why'. Much has been said about this, and I myself have also contributed to that (see here: TDD is not about testing, it's about how we develop software). But always justifying what you do is very unsatisfying in the long run; it is inherently defensive, and it costs time and effort that could be used for better and more important things. And frankly: I'm somewhat sick and tired of repeating time and again that the test-driven way of software development is highly preferable for many reasons - I don't want to spend my time exclusively on stating the obvious... So, again, let's say it clearly: TDD is programming, and programming is TDD. Other ways of programming (code-first, sometimes called cowboy-coding) are exceptional and need justification. - I know that there are many people out there who will disagree with this radical statement, and I also know that it's not a description of the real world but more of a mission statement or something. But nevertheless I'm absolutely sure that in some years this statement will be nothing but a platitude.

    Side note: some parts of this post read as if I were paid by JetBrains (the manufacturer of the ReSharper add-in - R#), but I swear I'm not. Rather, I think that Visual Studio is just not production-complete without it, and I wouldn't even consider doing professional work without having this add-in installed...

    The three parts of a software component

    Before I go into the details, I first should describe my understanding of what belongs to a software component (assembly, type, or method) during the production process (i.e. the coding phase). Roughly, I come up with the three parts described below.

    First, we need to have some initial sort of requirement. This can be a multi-page formal document, a vague idea in some programmer's brain of what might be needed, or anything in between. Either way, there has to be some sort of requirement, be it explicit or not.
    - At the C# micro-level, the best way that I have found to formulate a requirement is to define interfaces for just about everything, even for internal classes, and to provide them with exhaustive XML comments. The next step then is to re-formulate these requirements in an executable form. This is specific to the respective programming language. - For C#/.NET, the Gallio framework (which includes MbUnit) in conjunction with the ReSharper add-in for Visual Studio is my toolset of choice. The third part, finally, is the production code itself. Its development is entirely driven by the requirements and their executable formulation. This is the delivery; the two other parts are 'only' there to make its production possible, to give it decent quality and reliability, and to significantly reduce related costs down the maintenance timeline. So while the first two parts are not really relevant for the customer, they are very important for the developer. The customer (or, in Scrum terms, the Product Owner) is not interested at all in how the product is developed; he is only interested in the fact that it is developed as cost-effectively as possible, and that it meets his functional and non-functional requirements. The rest is solely a matter of the developer's craftsmanship, and this is what I want to talk about during the remainder of this article...

    An example

    To demonstrate my way of doing real-world TDD, I decided to show the development of a (very) simple Calculator component. The example is deliberately trivial and silly, as examples always are. I am totally aware of the fact that real life is never that simple, but I only want to show some development principles here...

    The requirement

    As already said above, I start with writing down some words on the initial requirement, and I normally use interfaces for that, even for internal classes - the typical question "intf or not" doesn't even come to mind. I need them for my usual workflow, and using them automatically produces highly componentized and testable code anyway. To think about their usage in every single situation would slow down the production process unnecessarily. So this is what I begin with:

      namespace Calculator
      {
          /// <summary>
          /// Defines a very simple calculator component for demo purposes.
          /// </summary>
          public interface ICalculator
          {
              /// <summary>
              /// Gets the result of the last successful operation.
              /// </summary>
              /// <value>The last result.</value>
              /// <remarks>
              /// Will be <see langword="null" /> before the first successful operation.
              /// </remarks>
              double? LastResult { get; }

          } // interface ICalculator

      } // namespace Calculator

    So, I'm not beginning with a test, but with a sort of code declaration - and still I insist on being 100% test-driven. There are three important things here:

    - Starting this way gives me a method signature, which allows me to use IntelliSense and AutoCompletion and thus eliminates the danger of typos - one of the most regular, annoying, time-consuming, and therefore expensive sources of error in the development process.
    - In my understanding, the interface definition as a whole is more of a readable requirement document and technical documentation than anything else. So this is at least as much about documentation as about coding. The documentation must completely describe the behavior of the documented element.
    - I normally use an IoC container or some sort of self-written provider-like model in my architecture.
    In either case, I need my components defined via service interfaces anyway. - I will use the LinFu IoC framework here, for no other reason than that it is very simple to use.

    The 'Red' (pt. 1)

    First I create a folder for the project's third-party libraries and put the LinFu.Core dll there. Then I set up a test project (via a Gallio project template), and add references to the Calculator project and the LinFu dll. Finally I'm ready to write the first test, which will look like the following:

      namespace Calculator.Test
      {
          [TestFixture]
          public class CalculatorTest
          {
              private readonly ServiceContainer container = new ServiceContainer();

              [Test]
              public void CalculatorLastResultIsInitiallyNull()
              {
                  ICalculator calculator = container.GetService<ICalculator>();

                  Assert.IsNull(calculator.LastResult);
              }

          } // class CalculatorTest

      } // namespace Calculator.Test

    This is basically the executable formulation of (part of) what the interface definition states. Side note: there's one principle of TDD that is just plain wrong in my eyes: I'm talking about the "Red is 'does not compile'" thing. How could a compiler error ever be interpreted as a valid test outcome? I never understood that; it just makes no sense to me. (Or, in Derick's terms: this reason is as wrong as a reason ever could be...) A compiler error tells me: your code is incorrect, but nothing more. Instead, the 'Red' part of the red-green-refactor cycle has a clearly defined meaning to me: it means that the test works as intended and fails only if its assumptions are not met for some reason.

    Back to our Calculator. When I execute the above test with R#, the Gallio plugin tells me that the test is red for the wrong reason: there's no implementation that the IoC container could load, of course. So let's fix that. With R#, this is very easy: first, create an ICalculator-derived type; next, implement the interface members; and finally, move the new class to its own file. So far my 'work' was six mouse clicks long; the only thing left to do manually here is to add the IoC-specific wiring declaration and also to make the respective class non-public, which I regularly do to force my components to communicate exclusively via interfaces. This is what my Calculator class looks like as of now:

      using System;
      using LinFu.IoC.Configuration;

      namespace Calculator
      {
          [Implements(typeof(ICalculator))]
          internal class Calculator : ICalculator
          {
              public double? LastResult
              {
                  get
                  {
                      throw new NotImplementedException();
                  }
              }
          }
      }

    Back to the test fixture, we have to put our IoC container to work:

      [TestFixture]
      public class CalculatorTest
      {
          #region Fields

          private readonly ServiceContainer container = new ServiceContainer();

          #endregion // Fields

          #region Setup/TearDown

          [FixtureSetUp]
          public void FixtureSetUp()
          {
              container.LoadFrom(AppDomain.CurrentDomain.BaseDirectory, "Calculator.dll");
          }

          ...

    Because I have an R# live template defined for the setup/teardown method skeleton as well, the only manual coding here again is the IoC-specific stuff: two lines, not more...

    The 'Red' (pt. 2)

    Now, the execution of the above test tells me that the method under test is called.
    And this is the point where Derick and I seem to have somewhat different views on the subject: of course, the test is still worthless regarding the red/green outcome (or: it's still red for the wrong reasons, in that it gives a false negative). But as far as I am concerned, I'm not really interested in the test outcome at this point of the red-green-refactor cycle. Rather, I only want to assert that my test actually calls the right method. If that's the case, I will happily go on to the 'Green' part...

    The 'Green'

    Making the test green is quite trivial. Just make LastResult an automatic property:

      [Implements(typeof(ICalculator))]
      internal class Calculator : ICalculator
      {
          public double? LastResult { get; private set; }
      }

    One more round...

    Now on to something slightly more demanding (cough...). Let's state that our Calculator exposes an Add() method:

          ...

          /// <summary>
          /// Adds the specified operands.
          /// </summary>
          /// <param name="operand1">The operand1.</param>
          /// <param name="operand2">The operand2.</param>
          /// <returns>The result of the addition.</returns>
          /// <exception cref="ArgumentException">
          /// Argument <paramref name="operand1"/> is &lt; 0.<br/>
          /// -- or --<br/>
          /// Argument <paramref name="operand2"/> is &lt; 0.
          /// </exception>
          double Add(double operand1, double operand2);

      } // interface ICalculator

    A remark: I sometimes hear the complaint that XML comment stuff like the above is hard to read. That's certainly true, but irrelevant to me, because I read XML code comments with the CR_Documentor tool window. Apart from that, I'm heavily using XML code comments (see e.g. here for a detailed guide) because there is the possibility of automating help generation with nightly CI builds (using MS Sandcastle and the Sandcastle Help File Builder), and then publishing the results to some intranet location. This way, a team always has first-class, up-to-date technical documentation at hand about the current codebase. (And, also very important for speeding things up and avoiding typos: you have IntelliSense/AutoCompletion and R# support, and the comments are subject to compiler checking...)

    Back to our Calculator again: two more R# clicks implement the Add() skeleton:

          ...

          public double Add(double operand1, double operand2)
          {
              throw new NotImplementedException();
          }

      } // class Calculator

    As we have stated in the interface definition (which actually serves as our requirement document!), the operands are not allowed to be negative. So let's start implementing that. Here's the test:

      [Test]
      [Row(-0.5, 2)]
      public void AddThrowsOnNegativeOperands(double operand1, double operand2)
      {
          ICalculator calculator = container.GetService<ICalculator>();

          Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2));
      }

    As you can see, I'm using a data-driven unit test method here, mainly for these two reasons:

    - Because I know that I will have to do the same test for the second operand in a few seconds, I save myself from implementing another test method for this purpose. Rather, I will only have to add another Row attribute to the existing one.
    - From the test report, you can see that the argument values are explicitly printed out.
    This can be a valuable documentation feature even when everything is green: one can quickly review what values were tested exactly - the complete Gallio HTML report (as produced by the Continuous Integration runs) shows these values in a quite clear format (see below for an example).

    Back to our Calculator development: at the moment the test result shows that we're red again, because there is no implementation yet... Next we go on and implement the necessary parameter verification to become green again, and then we do the same thing for the second operand. To make a long story short, here's the test and the method implementation at the end of the second cycle:

      // in CalculatorTest:

      [Test]
      [Row(-0.5, 2)]
      [Row(295, -123)]
      public void AddThrowsOnNegativeOperands(double operand1, double operand2)
      {
          ICalculator calculator = container.GetService<ICalculator>();

          Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2));
      }

      // in Calculator:

      public double Add(double operand1, double operand2)
      {
          if (operand1 < 0.0)
          {
              throw new ArgumentException("Value must not be negative.", "operand1");
          }

          if (operand2 < 0.0)
          {
              throw new ArgumentException("Value must not be negative.", "operand2");
          }

          throw new NotImplementedException();
      }

    So far, we have sheltered our method from unwanted input, and now we can safely operate on the parameters without further caring about their validity (this is my interpretation of the Fail Fast principle, which is regarded here in more detail). Now we can think about the method's successful outcomes. First let's write another test for that:

      [Test]
      [Row(1, 1, 2)]
      public void TestAdd(double operand1, double operand2, double expectedResult)
      {
          ICalculator calculator = container.GetService<ICalculator>();

          double result = calculator.Add(operand1, operand2);

          Assert.AreEqual(expectedResult, result);
      }

    Again, I'm regularly using row-based test methods for these kinds of unit tests. The pattern shown above proved to be extremely helpful for my development work; I call it the Defined-Input/Expected-Output test idiom: you define your input arguments together with the expected method result. There are two major benefits from that way of testing:

    - In the course of refining a method, it's very likely that you come up with additional test cases. In our case, we might add tests for some edge cases like 'one of the operands is zero' or 'the sum of the two operands causes an overflow', or maybe there's an external test protocol that has to be fulfilled (e.g. an ISO norm for medical software), and this results in the need to test against additional values. In all these scenarios we only have to add another Row attribute to the test.
    - Remember that the argument values are written to the test report, so as a side effect this produces valuable documentation. (This can become especially important if the fulfillment of some sort of external requirement has to be proven.)
    So your test method might look something like this in the end:

      [Test, Description("Arguments: operand1, operand2, expectedResult")]
      [Row(1, 1, 2)]
      [Row(0, 999999999, 999999999)]
      [Row(0, 0, 0)]
      [Row(0, double.MaxValue, double.MaxValue)]
      [Row(4, double.MaxValue - 2.5, double.MaxValue)]
      public void TestAdd(double operand1, double operand2, double expectedResult)
      {
          ICalculator calculator = container.GetService<ICalculator>();

          double result = calculator.Add(operand1, operand2);

          Assert.AreEqual(expectedResult, result);
      }

    And this will produce a clear HTML report (with Gallio). Not bad for the amount of work we invested in it, huh? - There might be scenarios where reports like that can be useful for demonstration purposes during a Scrum sprint review...

    The last requirement to fulfill is that the LastResult property is expected to store the result of the last operation. I don't show this here; it's trivial enough and brings nothing new...

    And finally: Refactor (for the right reasons)

    To demonstrate my way of going through the refactoring portion of the red-green-refactor cycle, I added another method to our Calculator component, namely Subtract(). Here's the code (tests and production):

      // CalculatorTest.cs:

      [Test, Description("Arguments: operand1, operand2, expectedResult")]
      [Row(1, 1, 0)]
      [Row(0, 999999999, -999999999)]
      [Row(0, 0, 0)]
      [Row(0, double.MaxValue, -double.MaxValue)]
      [Row(4, double.MaxValue - 2.5, -double.MaxValue)]
      public void TestSubtract(double operand1, double operand2, double expectedResult)
      {
          ICalculator calculator = container.GetService<ICalculator>();

          double result = calculator.Subtract(operand1, operand2);

          Assert.AreEqual(expectedResult, result);
      }

      [Test, Description("Arguments: operand1, operand2, expectedResult")]
      [Row(1, 1, 0)]
      [Row(0, 999999999, -999999999)]
      [Row(0, 0, 0)]
      [Row(0, double.MaxValue, -double.MaxValue)]
      [Row(4, double.MaxValue - 2.5, -double.MaxValue)]
      public void TestSubtractGivesExpectedLastResult(double operand1, double operand2, double expectedResult)
      {
          ICalculator calculator = container.GetService<ICalculator>();

          calculator.Subtract(operand1, operand2);

          Assert.AreEqual(expectedResult, calculator.LastResult);
      }

      ...

      // ICalculator.cs:

      /// <summary>
      /// Subtracts the specified operands.
      /// </summary>
      /// <param name="operand1">The operand1.</param>
      /// <param name="operand2">The operand2.</param>
      /// <returns>The result of the subtraction.</returns>
      /// <exception cref="ArgumentException">
      /// Argument <paramref name="operand1"/> is &lt; 0.<br/>
      /// -- or --<br/>
      /// Argument <paramref name="operand2"/> is &lt; 0.
      /// </exception>
      double Subtract(double operand1, double operand2);

      ...

      // Calculator.cs:

      public double Subtract(double operand1, double operand2)
      {
          if (operand1 < 0.0)
          {
              throw new ArgumentException("Value must not be negative.", "operand1");
          }

          if (operand2 < 0.0)
          {
              throw new ArgumentException("Value must not be negative.", "operand2");
          }

          return (this.LastResult = operand1 - operand2).Value;
      }

    Obviously, the argument validation stuff that was produced during the red-green part of our cycle duplicates the code from the previous Add() method. So, to avoid code duplication and minimize the number of lines of production code, we do an Extract Method refactoring.
    One more time, this is only a matter of a few mouse clicks (and giving the new method a name) with R#. Having done that, our production code finally looks like this:

      using System;
      using LinFu.IoC.Configuration;

      namespace Calculator
      {
          [Implements(typeof(ICalculator))]
          internal class Calculator : ICalculator
          {
              #region ICalculator

              public double? LastResult { get; private set; }

              public double Add(double operand1, double operand2)
              {
                  ThrowIfOneOperandIsInvalid(operand1, operand2);

                  return (this.LastResult = operand1 + operand2).Value;
              }

              public double Subtract(double operand1, double operand2)
              {
                  ThrowIfOneOperandIsInvalid(operand1, operand2);

                  return (this.LastResult = operand1 - operand2).Value;
              }

              #endregion // ICalculator

              #region Implementation (Helper)

              private static void ThrowIfOneOperandIsInvalid(double operand1, double operand2)
              {
                  if (operand1 < 0.0)
                  {
                      throw new ArgumentException("Value must not be negative.", "operand1");
                  }

                  if (operand2 < 0.0)
                  {
                      throw new ArgumentException("Value must not be negative.", "operand2");
                  }
              }

              #endregion // Implementation (Helper)

          } // class Calculator

      } // namespace Calculator

    But is the above worth the effort at all? It's obviously trivial and not very impressive. All our tests were green (for the right reasons), and refactoring the code did not change anything. It's not immediately clear how this refactoring work adds value to the project. Derick puts it like this:

    "STOP! Hold on a second... before you go any further and before you even think about refactoring what you just wrote to make your test pass, you need to understand something: if you're done with your requirements after making the test green, you are not required to refactor the code. I know... I'm speaking heresy here. Toss me to the wolves, I've gone over to the dark side! Seriously, though... if your test is passing for the right reasons, and you do not need to write any more tests or any more code for your class at this point, what value does refactoring add?"

    Derick immediately answers his own question:

    "So why should you follow the refactor portion of red/green/refactor? When you have added code that makes the system less readable, less understandable, less expressive of the domain or concern's intentions, less architecturally sound, less DRY, etc., then you should refactor it."

    I couldn't state it more precisely. From my personal perspective, I'd add the following: you have to keep in mind that real-world software systems are usually quite large, and there are dozens or even hundreds of occasions where micro-refactorings like the above can be applied. It's the sum of them all that counts. And to have good overall quality of the system (e.g. in terms of the Code Duplication Percentage metric) you have to be pedantic on the individual, seemingly trivial cases. My job regularly requires the reading and understanding of 'foreign' code.
    So code quality/readability really makes a HUGE difference for me - sometimes it can even be the difference between project success and failure...

    Conclusions

    The development process described above emerged over the years, and there were mainly two things that guided its evolution (you might call them eternal principles, personal beliefs, or anything in between):

    - Test-driven development is the normal, natural way of writing software; code-first is exceptional. So 'doing TDD or not' is not a question. And good, stable code can only reliably be produced by doing TDD (yes, I know: many will strongly disagree here again, but I've never seen high-quality code - and high-quality code is code that stood the test of time and causes low maintenance costs - that was produced code-first...).
    - It's the production code that pays our bills in the end. (Though I have seen customers these days who demand an acceptance test battery as part of the final delivery. Things seem to be going in the right direction...) The test code serves 'only' to make the production code work. It's the number of delivered features that solely counts at the end of the day - no matter how much test code you wrote or how good it is.

    With these two things in mind, I tried to optimize my coding process for coding speed - or, in business terms: productivity - without sacrificing the principles of TDD (more than I'd do either way...). As a result, I consider a ratio of about 3-5/1 for test code vs. production code as normal and desirable. In other words: roughly 60-80% of my code is test code. (This might sound heavy, but that is mainly due to the fact that software development standards are only beginning to evolve. The entire software development profession is very young, historically seen - only at the very beginning - and there are no viable standards yet. If you think about software development as a kind of casting process, where the test code is the mold and the resulting production code is the final product, then the above ratio no longer sounds extraordinary...)

    Although the above might look like very much unnecessary work at first sight, it's not. With the aid of the mentioned add-ins, doing all of the above is a matter of minutes, sometimes seconds (while writing this post took hours and days...). The most important thing is to have the right tools at hand. Slow developer machines or the lack of a tool or something like that - for 'saving' a few hundred bucks - is just not acceptable and a very bad decision in business terms (though I have seen and heard that quite a few times...). Production of high-quality products needs the usage of high-quality tools. This is a platitude that every craftsman knows...

    The round-trip described here takes me about five to ten minutes in my real-world development practice. I guess it's about 30% more time compared to developing the 'traditional' (code-first) way. But the so-manufactured 'product' is of much higher quality and massively reduces maintenance costs, which is by far the single biggest cost factor, as I showed in this previous post: It's the maintenance, stupid! (or: Something is rotten in developerland.). In the end, this is a highly cost-effective way of software development...

    But on the other hand, there clearly is a trade-off here: coding speed vs. code quality/later maintenance costs. The development method described here might be a perfect fit for the overwhelming majority of software projects, but there certainly are some scenarios where it's not - e.g.
    if time-to-market is crucial for a software project. So this is a business decision in the end. It's just that you have to know what you're doing and what consequences it might have...

    Some last words

    First, I'd like to thank Derick Bailey again. His two aforementioned posts (which I strongly recommend reading) inspired me to think deeply about my own personal way of doing TDD and to clarify my thoughts about it. I wouldn't have done that without this inspiration. I really enjoy that kind of discussion... I agree with him in all respects. But I don't know (yet?) how to bring his insights into the described production process without slowing things down. The method described above proved to be "good enough" in my practical experience. But of course, I'm open to suggestions here... My rationale for now is: if the test is initially red during the red-green-refactor cycle, the 'right reason' is: it actually calls the right method, but this method is not yet operational. Later on, when the cycle is finished and the tests become part of the regular, automated Continuous Integration process, 'red' certainly must occur for the 'right reason': in this phase, 'red' MUST mean nothing but an unfulfilled assertion - Fail By Assertion, Not By Anything Else!

    Read the article

  • Shutting down a WPF application from App.xaml.cs

    - by Johannes Rössel
    I am currently writing a WPF application which does command-line argument handling in App.xaml.cs (which is necessary because the Startup event seems to be the recommended way of getting at those arguments). Based on the arguments, I want to exit the program at that point already, which, as far as I know, should be done in WPF with Application.Current.Shutdown(), or in this case (as I am in the current application object) probably also just this.Shutdown(). The only problem is that this doesn't seem to work right. I've stepped through with the debugger, and code after the Shutdown() line still gets executed, which leads to errors later in the method, since I expected the application not to live that long. Also, the main window (declared in the StartupUri attribute in XAML) still gets loaded. I've checked the documentation of that method and found nothing in the remarks telling me that I shouldn't use it during Application.Startup, or in Application at all. So, what is the right way to exit the program at that point, i.e. in the Startup event handler of an Application object?
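    For what it's worth, the observed behavior matches how Shutdown() works: it queues the shutdown and only takes effect once control returns to the dispatcher, so statements after the call still run, and the StartupUri window can still be created in the meantime. A hedged sketch of one common workaround - remove StartupUri from App.xaml and decide everything in OnStartup (the "--quit-early" argument and the MainWindow class are placeholders):

      using System.Linq;
      using System.Windows;

      public partial class App : Application
      {
          protected override void OnStartup(StartupEventArgs e)
          {
              base.OnStartup(e);

              if (e.Args.Contains("--quit-early"))
              {
                  Shutdown(1); // queues the shutdown on the dispatcher...
                  return;      // ...so return immediately; any code below would still run
              }

              new MainWindow().Show(); // shown manually instead of via StartupUri
          }
      }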

    Read the article

  • How do I get the current Application Name (in terms of IIS) in a classic asp Web application

    - by Mr AH
    I have a classic ASP application which retrieves the current application name and sets an Application variable containing that name. This name is important (I won't go into why) and is essentially the friendly name in IIS. The problem is, the implementation used to get this name is flawed: it (a) assumes the home directory contains the string wwwroot, and (b) assumes the folder name is the same as the application name. I can no longer guarantee these conditions. I would have thought the application name is known at run time, but I can't seem to find it in either the Session or Application variables (at the application start-up entry point in global.asa). Any ideas?

    Read the article

  • Big Data – ClustrixDB – Extreme Scale SQL Database with Real-time Analytics, Releases Software Download – NewSQL

    - by Pinal Dave
    There are so many things to learn and so little time. As we have little time, we need to be selective about what we learn. I believe I know quite a lot of things in SQL, but I still do not know what is around SQL. I have started to learn about NewSQL recently. If you wonder what NewSQL is, I encourage all of you to read my blog post about NewSQL over here: Big Data - Buzz Words: What is NewSQL - Day 10 of 21. NewSQL databases are quickly becoming popular - providing the scale of NoSQL with SQL features and transactions. As a part of learning about NewSQL databases, I have recently started to learn about ClustrixDB. ClustrixDB has been the most mature NewSQL database, used by some of the largest internet sites in the world for over 3 years, with extensive SQL support. In addition to scale, it provides fast real-time analytics by bringing massively parallel processing (MPP), previously available only in warehousing databases, to the transactional database. The reason I am more intrigued about learning ClustrixDB is their recent announcement on Oct 31. ClustrixDB was only available as an appliance, but now, with their software release on Oct 31, everyone can use it. It is now available forever free for up to 12 cores with community support, and there is a 45-day trial for unlimited cluster sizes. With the forever-free offering, I am indeed interested in ClustrixDB now. I know that a few of the leading eCommerce sites in the world use it for their transactional database. Here are a few of the details I have quickly noted for ClustrixDB. ClustrixDB allows users to:

    - Scale by simply adding nodes to the cluster with a single command
    - Run billions of transactions a day
    - Run fast real-time analytics
    - Achieve high availability with recovery from node failure
    - Let the database manage itself
    - Easily migrate from MySQL, as it is nearly plug-and-play compatible; use MySQL drivers, tools and replication

    While I was going through the documentation, I realized that ClustrixDB also has extensive support for SQL features, including complex queries involving joins on a dozen or more tables, aggregates, sorts and sub-queries. It also supports stored procedures, triggers, foreign keys, partitioned and temporary tables, and fully online schema changes. It is indeed a very mature product and SQL solution. Indeed, Clustrix sounds like a very promising solution, so I decided to dig a bit deeper to understand who the current customers of Clustrix are, as they have existed in the industry for quite a few years. Their client list is indeed very interesting, and here is my quick research about them:

    - Twoo.com - Europe's largest social discovery (dating) site runs 4.4 billion transactions a day with table sizes over a terabyte, on a 168-core cluster.
    - EngageBDR - top 3 in the online advertising category, uses ClustrixDB to serve 6.9 billion ads a day through a real-time bidding platform. Their reports went from 4 hours to 15 seconds.
    - NoMoreRack - top 2 fastest growing e-commerce company in the US, used ClustrixDB for high availability and fast growth through the Amazon cloud.
    - MakeMyTrip - India's leading travel site runs on ClustrixDB with two clusters running as multi-master in Chennai and Bangalore.

    Many enterprises such as AOL, CSC, Rakuten and Symantec use ClustrixDB when their applications need scale. I must accept that I am impressed with the information I have learned so far, and now is the time to get some hands-on experience with their product. I want to learn this technology, so that in the future when the conversation turns to NewSQL, I know what I am talking about.
    Read more about why ClustrixDB might be the right database for you. Download ClustrixDB with me today and install it on your machine, so that in the future when we discuss its technical aspects, we are all on the same page. The software can be downloaded here. Reference: Pinal Dave (http://blog.SQLAuthority.com). Filed under: Big Data, MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL. Tagged: Clustrix

    Read the article

  • Monitoring ASP.NET Application

    - by imran_ku07
    Introduction:
    There are times when you may need to monitor your ASP.NET application's CPU and memory consumption, so that you can fine-tune your ASP.NET application (whether Web Forms, MVC or WebMatrix). Also, sometimes you may need to see all the exceptions (and their details) your application is raising, whether they are handled or not. If you are creating an ASP.NET application on .NET Framework 4.0, you can easily monitor your application's CPU or memory consumption and see how many exceptions your application is raising. In this article I will show you how you can do this.

    Description:
    With .NET Framework 4.0, you can turn on the monitoring of CPU and memory consumption by setting the AppDomain.MonitoringIsEnabled property to true. Also, in .NET Framework 4.0, you can register a callback method with the AppDomain.FirstChanceException event to monitor the exceptions being thrown within your application's AppDomain. Turning on the monitoring and registering a callback method will add some additional overhead to your application, which will hurt your application's performance. So it is better to turn on these features only if you have the following settings in the web.config file:

      <add key="AppDomainMonitoringEnabled" value="true"/>
      <add key="FirstChanceExceptionMonitoringEnabled" value="true"/>

    In case you wonder what FirstChanceException means: it is simply the first notification of an exception raised by your application. The CLR raises this notification even before the catch block that handles the exception. Now just update the global.asax.cs file as follows:

      string _item = "__RequestExceptionKey";

      protected void Application_Start()
      {
          SetupMonitoring();
      }

      private void SetupMonitoring()
      {
          bool appDomainMonitoringEnabled, firstChanceExceptionMonitoringEnabled;
          bool.TryParse(ConfigurationManager.AppSettings["AppDomainMonitoringEnabled"], out appDomainMonitoringEnabled);
          bool.TryParse(ConfigurationManager.AppSettings["FirstChanceExceptionMonitoringEnabled"], out firstChanceExceptionMonitoringEnabled);

          if (appDomainMonitoringEnabled)
          {
              AppDomain.MonitoringIsEnabled = true;
          }

          if (firstChanceExceptionMonitoringEnabled)
          {
              AppDomain.CurrentDomain.FirstChanceException += (object source, FirstChanceExceptionEventArgs e) =>
              {
                  if (HttpContext.Current == null) // If no context is available, ignore it
                      return;

                  if (HttpContext.Current.Items[_item] == null)
                      HttpContext.Current.Items[_item] = new RequestException { Exceptions = new List<Exception>() };

                  (HttpContext.Current.Items[_item] as RequestException).Exceptions.Add(e.Exception);
              };
          }
      }

      protected void Application_EndRequest()
      {
          if (Context.Items[_item] != null)
          {
              // Only add the request if at least one exception was raised
              var reqExc = Context.Items[_item] as RequestException;
              reqExc.Url = Request.Url.AbsoluteUri;

              Application.Lock();

              if (Application["AllExc"] == null)
                  Application["AllExc"] = new List<RequestException>();

              (Application["AllExc"] as List<RequestException>).Add(reqExc);

              Application.UnLock();
          }
      }

    Now browse to the Monitoring.cshtml file and you will see a screen showing the total bytes allocated, total bytes in use, and CPU usage of your application. The screen also shows you all the exceptions raised by your application, which is very helpful. I have uploaded a sample project on GitHub here; you can find the Monitoring.cshtml file in it. You can use this approach in ASP.NET MVC, ASP.NET Web Forms and WebMatrix applications.
    Summary:
    It is very important for administrators and developers to be able to manage and administer their web applications after deploying them to a production server. This article showed how to see the memory and CPU usage of a web application, and how to see all the exceptions the application is throwing, whether they are swallowed or not. Hopefully you will enjoy this article too.

    Read the article

  • Load balancing application servers with Alteon 2424-SSL

    - by antispam
    We are having problems with our load balancing configuration and we would like to clarify the situation. We need to load balance among four Java EE web application servers. The servers are configured as:

    - host1 port 7001
    - host1 port 7002
    - host2 port 7001
    - host2 port 7002

    Do any of you know if this is possible with the Nortel 2424-SSL application switch? What would be the best configuration for it (VIPs, ports, groups, services, ...)? Thank you very much.

    Read the article

  • Does installing GTK (PHP for desktop) affect the web application?

    - by Harshal Mahajan
    I am about to install GTK to create a desktop application with PHP. But I want to know: if we install GTK, will it affect our web application server, php.ini, or other features of web-based applications? I know a desktop application does not require a server, but GTK creates its own php.ini, so will it affect my other applications? I downloaded the GTK toolkit from here. I am just a little bit worried that it might affect my running web applications. I think PHP for the desktop is a very interesting topic for all of us, so I just want to know whether the desktop installation affects the web side.

    Read the article

  • Mailing List Application?

    - by marienbad
    Mailing lists are great, but they're a fundamentally different beast than email. It seems strange to me to keep mailing lists in my email program (Gmail). Of course I have folders set up to automatically keep them out of my inbox, but if I have hundreds of mailing lists, it gets really out of hand. Is there an application (or web application) that is designed specifically as a mailing list "client"?

    Read the article

  • launch an application from HTML with arguments

    - by Jugglingnutcase
    Is there a way to allow an HTML file to open an application on the local computer and send that application arguments? We have an application that allows a user to set a link to an external application. We also provide a summary page in HTML (they usually interact with the application from outside the browser) with the link in HTML as well. We can get applications to launch if the program exists, but can't seem to send arguments through the HTML link. Is this even possible?
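    For what it's worth, one common way to achieve this on Windows is to register a custom URI scheme for the target application, so that a link like <a href="myapp:some-argument"> launches the program with the URI passed on its command line. Below is a hedged registration sketch in C# - the scheme name "myapp" and the executable path are placeholders, and the code must run elevated because it writes to HKEY_CLASSES_ROOT:

      using Microsoft.Win32;

      class ProtocolInstaller
      {
          static void Main()
          {
              // Register the hypothetical "myapp:" URI scheme.
              using (RegistryKey root = Registry.ClassesRoot.CreateSubKey("myapp"))
              {
                  root.SetValue("", "URL:MyApp Protocol");
                  root.SetValue("URL Protocol", "");

                  using (RegistryKey cmd = root.CreateSubKey(@"shell\open\command"))
                  {
                      // "%1" receives the full URI; the application parses its arguments from it.
                      cmd.SetValue("", "\"C:\\Program Files\\MyApp\\MyApp.exe\" \"%1\"");
                  }
              }
          }
      }

    The launched application then reads the URI from its command line and extracts the arguments itself.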

    Read the article

  • Silverlight 4 business application themes

    - by David Brunelle
    Hi, we are starting a new Silverlight 4 Business Application project and are looking for a theme. All we can find on the web are Navigation Application themes, which, when applied to a Business Application project, don't work - most even produce compilation errors. Is there a place on the web to get themes specifically for that project type, or is there a way to translate a Navigation Application theme into a Business Application theme? Thank you.

    Read the article

  • ASP.NET Application Level vs. Session Level and Global.asax...confused

    - by contactmatt
    The following text is from the book I'm reading, 'MCTS Self-Paced Training Kit (Exam 70-515): Web Applications Development with ASP.NET 4'. It gives the rundown of the application life cycle:

    - A user first makes a request for a page in your site.
    - The request is routed to the processing pipeline, which forwards it to the ASP.NET runtime.
    - The ASP.NET runtime creates an instance of the ApplicationManager class; this class instance represents the .NET Framework domain that will be used to execute requests for your application. An application domain isolates global variables from other applications and allows each application to load and unload separately, as required.
    - After the application domain has been created, an instance of the HostingEnvironment class is created. This class provides access to items inside the hosting environment, such as directory folders.
    - ASP.NET creates instances of the core objects that will be used to process the request. This includes HttpContext, HttpRequest, and HttpResponse objects.
    - ASP.NET creates an instance of the HttpApplication class (or an instance is reused). This class is also the base class for a site's Global.asax file. You can use this class to trap events that happen when your application starts or stops. When ASP.NET creates an instance of HttpApplication, it also creates the modules configured for the application, such as the SessionStateModule.
    - Finally, ASP.NET processes the request through the HttpApplication pipeline. This pipeline also includes a set of events for validating requests, mapping URLs, accessing the cache, and more.

    The book then demonstrated an example of using the Global.asax file:

      <script runat="server">
          void Application_Start(object sender, EventArgs e)
          {
              Application["UsersOnline"] = 0;
          }

          void Session_Start(object sender, EventArgs e)
          {
              Application.Lock();
              Application["UsersOnline"] = (int)Application["UsersOnline"] + 1;
              Application.UnLock();
          }

          void Session_End(object sender, EventArgs e)
          {
              Application.Lock();
              Application["UsersOnline"] = (int)Application["UsersOnline"] - 1;
              Application.UnLock();
          }
      </script>

    When does an application start? What's the difference between the session and application levels? I'm rather confused about how this is managed. I thought that application-level classes "sat on top of" an AppDomain object, and the AppDomain contained information specific to that session for that user. Could someone please explain how IIS manages application-level classes, and how an HttpApplication class sits under an AppDomain? Anything is appreciated.
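    To illustrate the scope difference the question is really about, here is a minimal hand-written sketch (not from the book): application state is a single store shared by every user of the application domain, while session state is a separate store per user. In a Web Forms code-behind:

      protected void Page_Load(object sender, EventArgs e)
      {
          // Application state: one store per application domain, shared by all users,
          // hence the Lock/UnLock around the read-modify-write.
          Application.Lock();
          Application["TotalHits"] = (int)(Application["TotalHits"] ?? 0) + 1;
          Application.UnLock();

          // Session state: scoped to this user's session only; no lock needed,
          // since a session serves a single user.
          Session["MyHits"] = (int)(Session["MyHits"] ?? 0) + 1;
      }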

    Read the article
