Search Results

Search found 11953 results on 479 pages for 'functional testing'.

Page 42/479 | < Previous Page | 38 39 40 41 42 43 44 45 46 47 48 49  | Next Page >

  • Load and Web Performance Testing using Visual Studio Ultimate 2010-Part 3

    - by Tarun Arora
Welcome back once again. In Part 1 of Load and Web Performance Testing using Visual Studio 2010 I talked about why performance testing an application is important, the test tools available in Visual Studio Ultimate 2010 and the various test rig topologies. In Part 2 I discussed the details of web performance and load tests, as well as why it's important to follow a goal-based pattern while performance testing your application. In Part 3 I'll be discussing test result analysis, test result drill-through, test report generation, test run comparison, the ASP.NET profiler and some closing thoughts.

Test Results – I see some creepy worms!
In Part 2 we put together a web performance test and a load test; let's run the load test to see how the web site responds to the load simulation. While the load test is running you will be able to see close to real-time analysis in the Load Test Analyser window. You can use the Load Test Analyser to conduct load test analysis in three ways:

Monitor a running load test – A condensed set of the performance counter data is maintained in memory. To prevent the results memory requirements from growing unbounded, up to 200 samples for each performance counter are maintained. This includes 100 evenly spaced samples that span the current elapsed time of the run and the most recent 100 samples.

After the load test run is completed – The test controller spools all collected performance counter data to a database while the test is running. Additional data, such as timing details and error details, is loaded into the database when the test completes. The performance data for a completed test is loaded from the database and analysed by the Load Test Analyser. Below you can see a screen shot of the summary view; it provides key results in a compact, easy-to-read format. You can also print the load test summary, which is generated after the test has completed or been stopped.

Analyse the load test results of a previously run load test – We'll see this in the section where I discuss the comparison between two test runs.

The performance counters can be plotted on the graphs. You also have the option to highlight a selected part of the test and view details, or drill down to the user activity chart, where you can hover over a point to see more details of the test run.

Generate Report => Test Run Comparisons
The level of reports you can generate using the Load Test Analyser is astonishing. You have the option to create Excel reports and conduct side-by-side analysis of two test results, or to track trend analysis. The tool also allows you to export the graph data either to MS Excel or to a CSV file. You can view the ASP.NET profiler report to conduct further analysis as well. View Data and Diagnostic Attachments opens the Choose Diagnostic Data Adapter Attachment dialog box to select an adapter to analyse the result type. For example, you can select an IntelliTrace adapter, click OK and open the IntelliTrace summary for the test agent that was used in the load test.

Compare results
This creates a set of reports that compares the data from two load test results using tables and bar charts. I have taken these screen shots from the MSDN documentation; I would highly recommend exploring the wealth of knowledge available on MSDN.
Leaving Thoughts
While load testing the application with an excessive load for a long duration, I managed to bring IIS to its knees by piling up a huge queue of requests waiting to be processed. This clearly means that IIS had run out of threads, as all the threads were busy processing existing requests. One easy way of fixing this is to increase the default number of allocated threads, but that might just escalate the problem. The better suggestion is to try and drill down to the actual root cause of the problem.

Whenever garbage collection runs it stops processing pages, so all requests that come in during that period are queued up; realistically, though, garbage collection completes in a fraction of a second. To understand this better let's look at the .NET heap. It is divided into a large object heap and a small object heap; anything greater than 85 KB in size is allocated on the large object heap. The large object heap is non-compacting, and remember that large objects are expensive to move around, so if you are allocating something on the large object heap, make sure that you really need it! The small object heap, on the other hand, is divided into generations: objects that are supposed to be short-lived live in Gen 0, and long-living objects eventually move to Gen 2 as garbage collections go through.

As you can see in the picture below, all objects smaller than 85 KB are first assigned to Gen 0. When a new object comes in and finds Gen 0 full, the garbage collection process starts: it checks for dead objects, marks them as candidates for deletion to free up memory, and promotes all the remaining objects in Gen 0 to Gen 1. So in the future, whenever you clean up Gen 1 you have to clean up Gen 0 as well. When you fill up Gen 0 again, Gen 1's dead objects are cleared, the survivors are moved to Gen 2, and the Gen 0 objects are moved to Gen 1 to free up Gen 0; by this time your garbage collection process has started to take much more time than it usually does (a small sketch of this promotion appears at the end of this section). Now, as I mentioned earlier, while garbage collection is running all page requests that come in during that period are queued up. Does this explain why page requests might be getting queued up? Apart from this, it could also be the case that you are waiting for a long-running database process to complete.

Let's explore the heap a bit more…
What is really a crisis is when objects live just long enough to make it to Gen 2 and then die; that is definitely a high-cost operation. But sometimes you need objects in memory, for example when you cache data: you hold on to the objects because you need to use them right across the user session, which is acceptable. But if you want to see what extreme caching can do to your server, write a simple application that chucks a lot of data into the cache and run a load test over it for about 10-15 minutes, forcing so much data into memory that the heap runs out of memory. If you get to a state where you start running out of memory, IIS restarts the worker process as a mode of recovery. That is a great way to free up all the memory in the heap, but it also clears the cache. The problem with this is that if the customer had 10 items in their shopping basket and that data was stored in the application cache, the user's basket will now be empty, forcing them either to get frustrated and go to a competitor's website or, if the customer is really patient, to give it another try!
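To see the generational promotion described above in action, here is a small console sketch of my own (not from the original post) that uses the GC.GetGeneration and GC.CollectionCount APIs:

using System;

class GenerationDemo
{
    static void Main()
    {
        var data = new byte[1024];                        // small object: starts in Gen 0
        Console.WriteLine(GC.GetGeneration(data));        // 0

        GC.Collect();                                     // survives a collection -> promoted
        Console.WriteLine(GC.GetGeneration(data));        // 1

        GC.Collect();
        Console.WriteLine(GC.GetGeneration(data));        // 2 (the highest generation)

        var big = new byte[100 * 1024];                   // > 85 KB: goes to the large object heap
        Console.WriteLine(GC.GetGeneration(big));         // reported as generation 2

        Console.WriteLine(GC.CollectionCount(0));         // number of Gen 0 collections so far

        GC.KeepAlive(data);
        GC.KeepAlive(big);
    }
}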
How can you address this? Well, there are two ways of addressing it.

1. Workaround – A 32-bit (x86) operating system can only address a maximum of 4 GB of RAM, which means the machine effectively has around 3.4 GB of RAM available. The OS needs about 1.5 GB of RAM to run efficiently, and IIS and the .NET framework also need their share of memory, leaving you a heap of around 800 MB to play with. Because team builds by default compile your application as 'Any CPU', the application will run in 32-bit mode on a 32-bit machine and in 64-bit mode on a 64-bit machine. The problem with this is that not all applications are really 64-bit compatible, especially if you are using COM objects or external libraries. So, as a quick win, if you compile your application in x86 mode by changing the 'Any CPU' selection to x86 in the team build, you will be able to run your application on a 64-bit machine in 32-bit mode (under WOW64, 32-bit Windows on 64-bit Windows). What that means is that the machine can use 8 GB+ of RAM, and if you take away everything else your application will get a heap of roughly 4 GB to play with, which is immense. If you need a heap of more than 4 GB you are either building software for NASA or there is something fundamentally wrong in your application.

2. Solution – Now that you have put a workaround in place, IIS will not restart the worker process as regularly, which means you can take a breather and start working to get to the root cause of the memory leak. But this begs the question: "How do I identify possible memory leaks in my application?" Well, I won't say that there is one single tool that can tell you where the memory leak is, but trust me, performance profiling is a great starting point; it definitely gets you moving in the right direction. Let's have a look at how.

Performance Wizard – Start the Performance Wizard and select Instrumentation; this lets you measure function call counts and timings. Before running the performance session, right-click the performance session settings and choose Properties from the context menu to bring up the Performance Session Properties page and, as shown in the screen shot below, check the check boxes in the group '.NET memory profiling collection', namely 'Collect .NET object allocation information' and 'Also collect the .NET object lifetime information'. Now if you fire off the profiling session on your pages, the results allow you to view 'Object Lifetime', which shows you the number of objects that made it to Gen 0, Gen 1, Gen 2, the large object heap, etc. Another great feature of the profiler is that if more than 5% of your objects die right after making it to Gen 2, a threshold alert is generated to warn you. You also have the option to view the most expensive methods, and by capturing IntelliTrace data you can drill in and narrow down to the line of code that is the root cause of the problem. So now we have seen how crucial memory management is, and how easy Visual Studio Ultimate 2010 makes it to identify and reproduce such problems with the best-of-breed tools in the product.

Caching
One of the main ways to improve performance is caching, which basically means you tell the web server that, instead of going to the database for each request, you keep the data on the web server and serve it from there when the user asks for it. BUT that can have consequences!
Let's look at some code. Trust me, caching code is not very intuitive. I define a cache key for almost all searches made through the common search page and cache the results. The approach works fine: the first time I get the data from the database, and the second time the data is served from the cache, a significant performance improvement, EXCEPT when two users try to do the same operation and run into each other. It is easy to handle this by adding a lock, as sketched below: as soon as a user comes in and finds that the cache is empty, the user takes the lock and starts to populate the cache, and there are no more concurrency issues. But let's say you are processing 10 requests per second. By the time I have taken the lock to get the results from the database, 9 other users have come in and found that the cache key is null, so after I have come out and populated the cache they will still go in and fetch the results again. The application will still be faster, because the next set of 10 users, and so on, will continue to get data from the cache. BUT if we add another null check after taking the lock, and before the actual call to the database, then the 9 users who follow me will not make the extra trip to the database at all, and that really increases the performance. But didn't I say that the code won't be very intuitive? Maybe you should leave a comment; you don't want another developer to come in and think, "why is he checking the cache key for null twice?!"

The downside of caching is that you are storing the data outside of the database, and the data could be wrong, because updates applied to the database will make the data cached at the web server go out of sync. So, how do you invalidate the cache? Well, if you only had one way of updating the data, let's say a single entry point for data updates, you could write some logic to set the cache object to null every time new data is entered. But this approach will not work as soon as you have several ways of feeding data into the system, or your system is scaled out across a farm of web servers. The perfect solution to this is micro caching, which means you cache the query results for a set time duration and invalidate the cache after that duration. The advantage is that every time the user queries for that data within the time span for which you have cached the results, no calls are made to the database and the data is served straight from the server, which makes the response immensely quick. Figuring out the appropriate time span for which to micro cache the query results really depends on the application. Let's say your website gets 10 requests per second: if you retain the cached results for even 1 minute you will see immense performance gains, cutting the search hits to the database by well over 90%. Ever wondered why, when you go to e-bookers.com or xpedia.com or yatra.com to book a flight and you click on the book button because the fare seems too exciting, you get an error message telling you that the fare is not valid any more? Yes, exactly: that is a cache failure! These travel sites and price-comparison engines are not going to hit the database every time you hit the compare button; instead the results are served from the cache, because the query results are micro cached. It's a perfect trade-off: by micro caching the results the site gains huge performance benefits, but every once in a while it annoys a customer because the fare has expired.
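Here is a minimal sketch of the double-checked caching pattern described above, assuming an ASP.NET application that caches through HttpRuntime.Cache; the type and method names (SearchCache, LoadFromDatabase) are illustrative rather than taken from the original code, and the one-minute absolute expiry also shows the micro caching idea:

using System;
using System.Collections.Generic;
using System.Web;
using System.Web.Caching;

public static class SearchCache
{
    private static readonly object CacheLock = new object();

    public static IList<string> GetResults(string cacheKey)
    {
        // First check: serve straight from the cache on the hot path.
        var results = HttpRuntime.Cache[cacheKey] as IList<string>;
        if (results == null)
        {
            lock (CacheLock)
            {
                // Second check: a request that was queued on the lock finds the
                // cache already populated and skips the database trip entirely.
                results = HttpRuntime.Cache[cacheKey] as IList<string>;
                if (results == null)
                {
                    results = LoadFromDatabase(cacheKey);
                    // Micro caching: keep the results for a short, fixed duration.
                    HttpRuntime.Cache.Insert(cacheKey, results, null,
                        DateTime.UtcNow.AddMinutes(1), Cache.NoSlidingExpiration);
                }
            }
        }
        return results;
    }

    private static IList<string> LoadFromDatabase(string cacheKey)
    {
        // Placeholder for the real database call.
        return new List<string> { "result for " + cacheKey };
    }
}

The second null check inside the lock is exactly the non-intuitive line that deserves the comment mentioned above.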
But the trade-off works in the favour of these sites: they are still able to process 30+ page requests per second, which means they cater for the site traffic while maybe losing a customer every once in a while to a competitor who is probably using a similar caching technique anyway, and what are the odds that the user will not come back to their site sooner or later?

Recap

Resources
Below are some key resources you might like to review. I would highly recommend the documentation, walkthroughs and videos available on MSDN. You can always make use of Fiddler to debug web performance tests. Some community test extensions and plug-ins available on CodePlex might also be of interest to you.

The Road Ahead
Thank you for taking the time to read this blog post; you may also want to read Part I and Part II if you haven't so far. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Questions/feedback/suggestions, etc.: please leave a comment. Next up is 'Load Testing in the Cloud', where I'll be exploring the possibilities of running the test controller and agents in the cloud. See you on the other side! Thank you!

    Read the article

  • Testing Entity Framework applications, pt. 3: NDbUnit

    - by Thomas Weller
This is the third of a three-part series that deals with the issue of faking test data in the context of a legacy app that was built with Microsoft's Entity Framework (EF) on top of an MS SQL Server database – a scenario that can be found very often. Please read the first part for a description of the sample application, a discussion of some general aspects of unit testing in a database context, and of some more specific aspects of the EF/MSSQL combination discussed here. Lately, I wondered how you would 'mock' the data layer of a legacy application when this data layer is made up of an MS Entity Framework (EF) model in combination with an MS SQL Server database. Originally, this question came up in the context of how you could enable higher-level integration tests (automated UI tests, to be exact) for a legacy application that uses this EF/MSSQL combo as its data store mechanism – a not so uncommon scenario. The question sparked my interest, and I decided to dive into it somewhat deeper. What I've found out is, in short, that it's not very easy and straightforward to do – but it can be done. The two strategies that are best suited to fit the bill involve using either the (commercial) Typemock Isolator tool or the (free) NDbUnit framework. The use of Typemock was discussed in the previous post; this post will now present the NDbUnit approach...

NDbUnit is an Apache 2.0-licensed open-source project and, like so many other Nxxx tools and frameworks, it is basically a C#/.NET port of the corresponding Java version (namely DbUnit). In short, it helps you flexibly manage the state of a database: it lets you easily perform basic operations (e.g. Insert, Delete, Refresh, DeleteAll) against your database and, most notably, lets you feed it with data from external xml files. Let's have a look at how things can be done with the help of this framework.

Preparing the test data
Compared to Typemock, using NDbUnit implies a totally different approach to meeting our testing needs. The testing scenario described here requires a running instance of a SQL Server database, and it also means that the Entity Framework model that sits on top of this database is completely unaffected. First things first: for its interactions with the database, NDbUnit relies on a .NET DataSet xsd file. See Step 1 of their Quick Start Guide for a description of how to create one.
With this prerequisite in place, the test fixture's setup code could look something like this:

[TestFixture, TestsOn(typeof(PersonRepository))]
[Metadata("NDbUnit Quickstart URL",
          "http://code.google.com/p/ndbunit/wiki/QuickStartGuide")]
[Description("Uses the NDbUnit library to provide test data to a local database.")]
public class PersonRepositoryFixture
{
    #region Constants

    private const string XmlSchema = @"..\..\TestData\School.xsd";

    #endregion // Constants

    #region Fields

    private SchoolEntities _schoolContext;
    private PersonRepository _personRepository;
    private INDbUnitTest _database;

    #endregion // Fields

    #region Setup/TearDown

    [FixtureSetUp]
    public void FixtureSetUp()
    {
        var connectionString = ConfigurationManager.ConnectionStrings["School_Test"].ConnectionString;
        _database = new SqlDbUnitTest(connectionString);
        _database.ReadXmlSchema(XmlSchema);

        var entityConnectionStringBuilder = new EntityConnectionStringBuilder
        {
            Metadata = "res://*/School.csdl|res://*/School.ssdl|res://*/School.msl",
            Provider = "System.Data.SqlClient",
            ProviderConnectionString = connectionString
        };
        _schoolContext = new SchoolEntities(entityConnectionStringBuilder.ConnectionString);
        _personRepository = new PersonRepository(this._schoolContext);
    }

    [FixtureTearDown]
    public void FixtureTearDown()
    {
        _database.PerformDbOperation(DbOperationFlag.DeleteAll);
        _schoolContext.Dispose();
    }

    ...

As you can see, there is slightly more fixture setup code involved if your tests are using NDbUnit to provide the test data: because we're dealing with a physical database instance here, we first need to pick up the test-specific connection string from the test assembly's App.config, then initialize an NDbUnit helper object with this connection along with the provided xsd file, and also set up the SchoolEntities and PersonRepository instances accordingly. The _database field (an instance of the INDbUnitTest interface) will be our single access point to the underlying database: we use it to perform all the required operations against the data store. To have a flexible mechanism for easily inserting data into the database, we can write a helper method like this:

private void InsertTestData(params string[] dataFileNames)
{
    _database.PerformDbOperation(DbOperationFlag.DeleteAll);

    if (dataFileNames == null)
    {
        return;
    }

    try
    {
        foreach (string fileName in dataFileNames)
        {
            if (!File.Exists(fileName))
            {
                throw new FileNotFoundException(Path.GetFullPath(fileName));
            }
            _database.ReadXml(fileName);
            _database.PerformDbOperation(DbOperationFlag.InsertIdentity);
        }
    }
    catch
    {
        _database.PerformDbOperation(DbOperationFlag.DeleteAll);
        throw;
    }
}

This lets us easily insert test data from xml files, in any number and in a controlled order (which is important because we eventually must fulfill referential constraints, or we must account for some other stuff that imposes a specific ordering on data insertion). Again, as with Typemock, I won't go into API details here. Unfortunately, there isn't too much documentation for NDbUnit anyway, other than the already mentioned Quick Start Guide (and the source code itself, of course) – a not so uncommon problem with smaller open source projects.
Last but not least, we need to provide the required test data in xml form. A snippet for data from the People table might look like this, for example:

<?xml version="1.0" encoding="utf-8" ?>
<School xmlns="http://tempuri.org/School.xsd">
  <Person>
    <PersonID>1</PersonID>
    <LastName>Abercrombie</LastName>
    <FirstName>Kim</FirstName>
    <HireDate>1995-03-11T00:00:00</HireDate>
  </Person>
  <Person>
    <PersonID>2</PersonID>
    <LastName>Barzdukas</LastName>
    <FirstName>Gytis</FirstName>
    <EnrollmentDate>2005-09-01T00:00:00</EnrollmentDate>
  </Person>
  <Person>
    ...

You can also have data from various tables in one single xml file, if that's appropriate for you (but beware of the already mentioned ordering issues). It's true that your test assembly may end up with dozens of such xml files, each containing quite a big amount of text data. But because the files are of very low complexity, and with the help of a little bit of copy/paste and Excel magic, this appears to be well manageable.

Executing some basic tests
Here are some of the possible tests that can be written with the above preparations in place:

private const string People = @"..\..\TestData\School.People.xml";
...

[Test, MultipleAsserts, TestsOn("PersonRepository.GetNameList")]
public void GetNameList_ListOrdering_ReturnsTheExpectedFullNames()
{
    InsertTestData(People);

    List<string> names =
        _personRepository.GetNameList(NameOrdering.List);

    Assert.Count(34, names);
    Assert.AreEqual("Abercrombie, Kim", names.First());
    Assert.AreEqual("Zheng, Roger", names.Last());
}

[Test, MultipleAsserts, TestsOn("PersonRepository.GetNameList")]
[DependsOn("RemovePerson_CalledOnce_DecreasesCountByOne")]
public void GetNameList_NormalOrdering_ReturnsTheExpectedFullNames()
{
    InsertTestData(People);

    List<string> names =
        _personRepository.GetNameList(NameOrdering.Normal);

    Assert.Count(34, names);
    Assert.AreEqual("Alexandra Walker", names.First());
    Assert.AreEqual("Yan Li", names.Last());
}

[Test, TestsOn("PersonRepository.AddPerson")]
public void AddPerson_CalledOnce_IncreasesCountByOne()
{
    InsertTestData(People);

    int count = _personRepository.Count;
    _personRepository.AddPerson(new Person { FirstName = "Thomas", LastName = "Weller" });

    Assert.AreEqual(count + 1, _personRepository.Count);
}

[Test, TestsOn("PersonRepository.RemovePerson")]
public void RemovePerson_CalledOnce_DecreasesCountByOne()
{
    InsertTestData(People);

    int count = _personRepository.Count;
    _personRepository.RemovePerson(new Person { PersonID = 33 });

    Assert.AreEqual(count - 1, _personRepository.Count);
}

Not much difference here compared to the corresponding Typemock versions, except that we had to do a bit more preparatory work (and also it was harder to get the required knowledge). But this picture changes quite dramatically if we look at some more demanding test cases.

Ok, and what if things are becoming somewhat more complex?
Tests like the above ones represent the 'easy' scenarios. They may account for the biggest portion of real-world use cases of the application, and they are important to make sure that it is generally sound. But usually, all these nasty little bugs originate from the more complex parts of our code, or they occur when something goes wrong. So, for a testing strategy to be of real practical use, it is especially important to see how easy or difficult it is to mimic a scenario which represents a more complex or exceptional case.
The following test, for example, deals with the case that there is some sort of invalid input from the caller:

[Test, MultipleAsserts, TestsOn("PersonRepository.GetCourseMembers")]
[Row(null, typeof(ArgumentNullException))]
[Row("", typeof(ArgumentException))]
[Row("NotExistingCourse", typeof(ArgumentException))]
public void GetCourseMembers_WithGivenVariousInvalidValues_Throws(string courseTitle, Type expectedInnerExceptionType)
{
    var exception = Assert.Throws<RepositoryException>(() =>
                                _personRepository.GetCourseMembers(courseTitle));
    Assert.IsInstanceOfType(expectedInnerExceptionType, exception.InnerException);
}

Apparently, this test doesn't need an 'Arrange' part at all (see here for the same test with the Typemock tool). It acts just like any other client code, and all the required business logic comes from the database itself. This doesn't necessarily mean that there is less complexity, only that the complexity happens in a different part of your test resources (namely in the xml files, where you sometimes have to spend a lot of effort carefully preparing the required test data). Another example, which relies on an underlying 1-n relationship, might be this:

[Test, MultipleAsserts, TestsOn("PersonRepository.GetCourseMembers")]
public void GetCourseMembers_WhenGivenAnExistingCourse_ReturnsListOfStudents()
{
    InsertTestData(People, Course, Department, StudentGrade);

    List<Person> persons = _personRepository.GetCourseMembers("Macroeconomics");

    Assert.Count(4, persons);
    Assert.ForAll(
        persons,
        @p => new[] { 10, 11, 12, 14 }.Contains(@p.PersonID),
        "Person has none of the expected IDs.");
}

If you compare this test to its corresponding Typemock version, you immediately see that the test itself is much simpler, easier to read, and thus much more intention-revealing. The complexity here lies hidden behind the call to the InsertTestData() helper method and the content of the xml files that hold the test data. Also note that you might have to provide additional data which is not even directly relevant to your test, but is required only to fulfill some integrity needs of the underlying database.

Conclusion
The first thing to notice when comparing the NDbUnit approach to its Typemock counterpart obviously deals with performance: of course, NDbUnit is much slower than Typemock. Technically, it doesn't even make sense to compare the two tools. But practically, it may well play a role and could or could not be an issue, depending on how many tests of this kind you have, how often you run them, and what role they play in your development cycle. Also, because the dataset from the required xsd file must fully match the database schema (even in parts that otherwise wouldn't be relevant to you), it can be quite cumbersome to be in a team where different people are working with the database in parallel.
My personal experience is – as already said in the first part – that Typemock gives you a better development experience in a 'dynamic' scenario (when you're working in some kind of TDD style, you're often executing the tests from your dev box, and your database schema changes frequently), whereas the NDbUnit approach is a good and solid solution in more 'static' development scenarios (when you need to execute the tests less frequently or only on a separate build server, and/or the underlying database schema can be kept relatively stable), for example some variations of higher-level integration or user-acceptance tests. But in any case, opening Entity Framework based applications up for testing requires a fair amount of resources, planning, and preparatory work – it's definitely not the kind of stuff that you would call 'easy to test'. Hopefully, future versions of EF will take testing concerns into account. Otherwise, I don't see too much of a future for the framework in the long run, even though it's quite popular at the moment...

The sample solution
A sample solution (VS 2010) with the code from this article series is available via my Bitbucket account from here. (Bitbucket is a hosting site for Mercurial repositories. The repositories may also be accessed with the Git and Subversion SCMs - consult the documentation for details. In addition, it is possible to download the solution simply as a zipped archive – via the 'get source' button on the very right.) The solution contains some more tests against the PersonRepository class, which are not shown here. Also, it contains database scripts to create and fill the School sample database. To compile and run, the solution expects the Gallio/MbUnit framework to be installed (which is free and can be downloaded from here), the NDbUnit framework (which is also free and can be downloaded from here), and the Typemock Isolator tool (a fully functional 30-day trial is available here). Moreover, you will need an instance of the Microsoft SQL Server DBMS, and you will have to adapt the connection strings in the test projects' App.config files accordingly.

    Read the article

  • Can someone clarify what this Joel On Software quote means?

    - by Bob
    I was reading Joel On Software today and ran across this quote: Without understanding functional programming, you can't invent MapReduce, the algorithm that makes Google so massively scalable. The terms Map and Reduce come from Lisp and functional programming. MapReduce is, in retrospect, obvious to anyone who remembers from their 6.001-equivalent programming class that purely functional programs have no side effects and are thus trivially parallelizable. What does he mean when he says functional programs have no side effects? And how does this make parallelizing trivial?
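As a rough C# illustration of the point (my own example, using PLINQ): because the mapping function below is pure, the runtime is free to run it on any core in any order and then combine the partial results in a reduce step.

using System;
using System.Linq;

class PureMapReduce
{
    static void Main()
    {
        // Square is pure: its output depends only on its input and it writes
        // no shared state, so every element can be processed on any thread.
        Func<int, long> square = n => (long)n * n;

        long total = Enumerable.Range(1, 1000000)
            .AsParallel()        // "map" work is spread across cores
            .Select(square)
            .Sum();              // "reduce" combines the per-core partial sums

        Console.WriteLine(total);
    }
}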

    Read the article

  • What is the advantage of currying?

    - by Mad Scientist
I just learned about currying, and while I think I understand the concept, I'm not seeing any big advantage in using it. As a trivial example I use a function that adds two values (written in ML). The version without currying would be

fun add(x, y) = x + y

and would be called as add(3, 5), while the curried version is

fun add x y = x + y    (* short for val add = fn x => fn y => x + y *)

and would be called as add 3 5. It seems to me to be just syntactic sugar that removes one set of parentheses from defining and calling the function. I've seen currying listed as one of the important features of functional languages, and I'm a bit underwhelmed by it at the moment. The concept of creating a chain of functions that each consume a single parameter, instead of a function that takes a tuple, seems rather complicated to use for a simple change of syntax. Is the slightly simpler syntax the only motivation for currying, or am I missing some other advantages that are not obvious in my very simple example? Is currying just syntactic sugar?
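One advantage the example above doesn't show is partial application: because a curried function takes its arguments one at a time, you can fix some of them and get a new, reusable function back. A rough C# rendering of the same idea (my own illustration, not from the question):

using System;
using System.Linq;

class CurryingDemo
{
    static void Main()
    {
        // Curried add: a function that takes x and returns a function that takes y.
        Func<int, Func<int, int>> add = x => y => x + y;

        // Partial application: fix the first argument and reuse the result.
        Func<int, int> addThree = add(3);

        var shifted = new[] { 1, 2, 3 }.Select(addThree);   // 4, 5, 6
        Console.WriteLine(string.Join(", ", shifted));
    }
}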

    Read the article

  • Design for XML mapping scenarios between two different systems [on hold]

    - by deepak_prn
Mapping XML fields between two systems is a mundane routine in integration scenarios. I am trying to make the design documents look better and provide a clear understanding to the developers, especially when we do not use XSLT or any IDE such as jDeveloper or the Eclipse plugins. I want it to be a high-level design but at the same time talk in the developer's language, so that no requirements slip through the cracks. For example, one of the scenarios goes: the store cashier sells an item, and the transaction data is sent to the data management system. Now, I am writing a functional design for the scenario, which deals with mapping XML fields between our system and the data management system. Question: I was wondering if someone has had to deal with mapping XML fields between two systems (without XSLT being involved), and whether you used a table to represent the field mappings (an example is below) or any other visualization tool which does not break the bank? I am trying to find out if there is a better way to represent XML mapping in your design documents. The widely accepted and used method seems to be a simple table, such as in the picture, to illustrate the mapping. I am wondering if there are alternative ways/tools to represent it, such as in Altova:

    Read the article

  • Performance of concurrent software on multicore processors

    - by Giorgio
    Recently I have often read that, since the trend is to build processors with multiple cores, it will be increasingly important to have programming languages that support concurrent programming in order to better exploit the parallelism offered by these processors. In this respect, certain programming paradigms or models are considered well-suited for writing robust concurrent software: Functional programming languages, e.g. Haskell, Scala, etc. The actor model: Erlang, but also available for Scala / Java (Akka), C++ (Theron, Casablanca, ...), and other programming languages. My questions: What is the state of the art regarding the development of concurrent applications (e.g. using multi-threading) using the above languages / models? Is this area still being explored or are there well-established practices already? Will it be more complex to program applications with a higher level of concurrency, or is it just a matter of learning new paradigms and practices? How does the performance of highly concurrent software compare to the performance of more traditional software when executed on multiple core processors? For example, has anyone implemented a desktop application using C++ / Theron, or Java / Akka? Was there a boost in performance on a multiple core processor due to higher parallelism?

    Read the article

  • Does immutability entirely eliminate the need for locks in multi-processor programming?

    - by GlenPeterson
Part 1
Clearly, immutability minimizes the need for locks in multi-processor programming, but does it eliminate that need, or are there instances where immutability alone is not enough? It seems to me that you can only defer processing and encapsulate state for so long before most programs have to actually DO something. If a program performs actions on multiple processors, something needs to collect and aggregate the results. All this involves multi-process communication before, after, and possibly during some transformations. The start and end states of the machines are different. Can this always be done with no locks, just by throwing out each object and creating a new one instead of changing the original (a crude view of immutability)? What cases still require locking? I'm interested in both the theoretical/academic answer and the practical/real-world answer. I know a lot of functional programmers like to talk about "no side effects", but in the "real world" everything has a side effect. Every processor cycle takes time and electricity and machine resources away from other processes. So I understand that there may be more than one perspective to answer this question from. If immutability is safe, given certain bounds or assumptions, I want to know exactly where the borders of the "safety zone" are. Some examples of possible boundaries:
- I/O
- Exceptions/errors
- Interfaces with programs written in other languages
- Interfaces with other machines (physical, virtual, or theoretical)
Special thanks to @JimmaHoffa for his comment which started this question!

Part 2
Multi-processor programming is often used as an optimization technique, to make some code run faster. When is it faster to use locks vs. immutable objects? Given the limits set out in Amdahl's Law, when can you achieve better overall performance (with or without the garbage collector taken into account) with immutable objects vs. locking mutable ones?

Summary
I'm combining these two questions into one to try to get at where the bounding box is for immutability as a solution to threading problems.
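As one concrete illustration of the trade-off being asked about, here is a small C# sketch of my own (assuming the System.Collections.Immutable package is available): several threads aggregate results into shared state without a lock by building a new immutable list and publishing it with a compare-and-exchange. The lock disappears, but the atomic swap is still doing the coordination work.

using System;
using System.Collections.Immutable;
using System.Threading;
using System.Threading.Tasks;

class LockFreeAggregation
{
    // The shared state is a single reference to an immutable list.
    private static ImmutableList<int> _results = ImmutableList<int>.Empty;

    static void Main()
    {
        Parallel.For(0, 1000, i =>
        {
            // No lock: build a new list and swap it in atomically,
            // retrying if another thread swapped first.
            ImmutableList<int> current, updated;
            do
            {
                current = _results;
                updated = current.Add(i * i);
            } while (Interlocked.CompareExchange(ref _results, updated, current) != current);
        });

        Console.WriteLine(_results.Count);   // 1000
    }
}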

    Read the article

  • Breaking up classes and methods into smaller units

    - by micahhoover
During code reviews a couple of devs have recommended I break up my methods into smaller methods. Their justification was (1) increased readability and (2) that the back trace that comes back from production, showing the method name, is more specific to the line of code that failed. There may have also been some colorful words about functional programming. Additionally, I think I may have failed an interview a while back because I didn't give an acceptable answer about when to break things up. My inclination is that when I see a bunch of methods in a class or across a bunch of files, it isn't clear to me how they flow together and how many times each one gets called. I don't really get as good a feel for the linearity of it as quickly just by eye-balling it. The other thing is that a lot of people seem to place a premium on organization over content (e.g. 'Look at how organized my sock drawer is!' Me: 'Overall, I think I can get to my socks faster if you count the time it took to organize them'). Our business requirements are not very stable. I'm afraid that if the classes/methods are very granular it will take longer to refactor to requirement changes. I'm not sure how much of a factor this should be. Anyway, computer science is part art / part science, but I'm not sure how much this applies to this issue.

    Read the article

  • Is the Entity Component System architecture object oriented by definition?

    - by tieTYT
Is the Entity Component System architecture object oriented, by definition? It seems more procedural or functional to me. My opinion is that it doesn't prevent you from implementing it in an OO language, but it would not be idiomatic to do so in a staunchly OO way. It seems like ECS separates data (E & C) from behavior (S). As evidence:
"The idea is to have no game methods embedded in the entity."
And:
"The component consists of a minimal set of data needed for a specific purpose."
"Systems are single purpose functions that take a set of entities which have a specific component."
I think this is not object oriented, because a big part of being object oriented is combining your data and behavior together. As evidence:
"In contrast, the object-oriented approach encourages the programmer to place data where it is not directly accessible by the rest of the program. Instead, the data is accessed by calling specially written functions, commonly called methods, which are bundled in with the data."
ECS, on the other hand, seems to be all about separating your data from your behavior.
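To make the data/behaviour split concrete, here is a minimal, illustrative C# sketch (not taken from any particular ECS library): components are plain data, entities are just IDs, and a system is a function that runs over whichever entities carry the right components.

using System.Collections.Generic;

// Components: plain data, no behaviour.
struct Position { public float X, Y; }
struct Velocity { public float Dx, Dy; }

// A system: single-purpose behaviour that operates on entities (plain int IDs here)
// which have both a Position and a Velocity component.
class MovementSystem
{
    public void Update(IEnumerable<int> entities,
                       IDictionary<int, Position> positions,
                       IDictionary<int, Velocity> velocities,
                       float dt)
    {
        foreach (var entity in entities)
        {
            if (!positions.ContainsKey(entity) || !velocities.ContainsKey(entity))
                continue;   // this system only cares about movable entities

            var p = positions[entity];
            var v = velocities[entity];
            positions[entity] = new Position { X = p.X + v.Dx * dt, Y = p.Y + v.Dy * dt };
        }
    }
}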

    Read the article

  • Using foldr to append two lists together (Haskell)

    - by Luke Murphy
I have been given the following question as part of a college assignment. Due to the module being very short, we are using only a subset of Haskell, without any of the syntactic sugar or idiomatic shortcuts... I must write:

append xs ys : the list formed by joining the lists xs and ys, in that order
append (5:8:3:[]) (4:7:[]) => 5:8:3:4:7:[]

I understand the concept of how foldr works, but I am only starting off in functional programming. I managed to write the following working solution (hidden for the benefit of others in my class...). However, I just can't for the life of me explain what the hell is going on!? I wrote it by just fiddling around in the interpreter, for example with the following line:

foldr (\x -> \y -> x:y) [] (2:3:4:[])

which returned [2,3,4], which led me to try

foldr (\x -> \y -> x:y) (2:3:4:[]) (5:6:7:[])

which returned [5,6,7,2,3,4], so I worked it out from there. I came to the correct solution through guess work and a bit of luck... I am working from the following definition of foldr:

foldr = \f -> \s -> \xs -> if null xs then s else f (head xs) (foldr f s (tail xs))

Can someone baby-step me through my correct solution? I can't seem to get it... I have already scoured the web and also read a bunch of SE threads, such as How foldr works.

    Read the article

  • not using partial mocking? do they also mean in web-app?

    - by 01
I'm learning Mockito, and in chapter 16 they say you should not use partial mocking in a new system. I disagree; for example, in one of my actions I use partial mocking for static framework methods, SQL calls, etc. I extracted the stuff into methods and then mock them in tests. Most of those methods are specific to this action and won't be called from other actions, so it's not worth extracting them into special components. I agree that you shouldn't use partial mocking on frameworks, but not that you should avoid it in hard-to-mock actions. What are the minuses of using partial mocking in a web app?

    Read the article

  • How to penetrate the QA industry after layoffs, next steps...

    - by Erik
Briefly, my background is in manual black box testing of websites and applications within the Agile/waterfall context. Over the past four years I was a member of two web development firms' small QA teams dedicated to testing the deployment of websites for national/international non-profits, governmental organizations, and for-profit businesses, to name a few:
- Brookings Institution
- Senate
- Tyco Electronics
- Blue Cross/Blue Shield
- National Geographic
- Discover Channel
I have a very strong understanding of the:
- SDLC
- STLC of bugs and website deployment/development
- Use case & test case development
In March of this year my last firm downsized, and I lost my job as a QA tester. I have been networking and doing a very detailed job search, but have had a very difficult time getting my next job within the QA industry, even with my background as a manual black box QA tester in the website development context. My direct question to all of you: what are some ways I can be more competitive and get hired? Options that could get me competitive:
Should I go back to school and learn some more 'hard' skills in website development and client-side technologies, e.g.:
- HTML
- CSS
- JavaScript
Learn programming:
- PHP
- C#
- Ruby
- SQL
- Python
- Perl
- ??
Get certified as a QA tester; there are a countless number of programs to become a certified tester. Most, if not all, jobs being advertised now require automated testing experience, in:
- QTP
- LoadRunner
- Selenium
- etc.
Should I learn automated testing skills via a paid course, or teach myself? Learn scripting languages to understand the automated testing process better? Become a certified "Project Management Professional" (PMP) to prove to hiring managers that I 'get' the project development life cycle? At the end of the day I need to be competitive and get hired as a QA tester, and I want to build upon my skills within the QA web development field. How should I do this without reinventing the wheel? Any help in this regard would be fabulous. Thanks! .erik

    Read the article

  • How can I effectively test a scripting engine?

    - by ChaosPandion
I have been working on an ECMAScript implementation and I am currently working on polishing up the project. As a part of this, I have been writing tests like the following:

[TestMethod]
public void ArrayReduceTest()
{
    var engine = new Engine();
    var request = new ExecScriptRequest(@"
        var a = [1, 2, 3, 4, 5];
        a.reduce(function(p, c, i, o) {
            return p + c;
        });
    ");
    var response = (ExecScriptResponse)engine.PostWithReply(request);
    Assert.AreEqual((double)response.Data, 15D);
}

The problem is that there are so many points of failure in this test and similar tests that it almost doesn't seem worth it. It almost seems like my effort would be better spent reducing coupling between modules. To write a true unit test I would have to write something like this:

[TestMethod]
public void CommentTest()
{
    const string toParse = "/*First Line\r\nSecond Line*/";
    var analyzer = new LexicalAnalyzer(toParse);
    {
        Assert.IsInstanceOfType(analyzer.Next(), typeof(MultiLineComment));
        Assert.AreEqual(analyzer.Current.Value, "First Line\r\nSecond Line");
    }
}

Doing this would require me to write thousands of tests, which once again does not seem worth it.

    Read the article

  • How do you do HTML form testing without real user input simulation ?

    - by justjoe
This question is like this one, except it's for PHP testing via the browser. It's about testing your form input. Right now, I have a form on a single page. It has 12 input boxes. Every time I test the form, I have to fill in those 12 input boxes in my browser. I know it's not a specific coding question; this question is more about how to do direct testing of your form. So, how do you do this kind of repetitive testing without it consuming too much of your time?

    Read the article

  • Should my WCF webservice return a 500 or 200 http code (soap fault / functional return message)

    - by Tim Mahy
Hi all, after reading the SOAP specs, which state that a SOAP fault should be returned with an HTTP 500 error code, I see that when a SoapException is thrown, WCF returns a 500 error code. Now, I'm looking for some best practices on when to return a functional error message and when to return a SOAP fault. What would you return when a functional error occurs while processing the message because the input message contains some functional errors: a 500 SOAP fault, or a 200 SOAP response containing some error message?
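For reference, in WCF the SOAP-fault route usually means declaring a fault contract and throwing a typed FaultException, which the framework serializes as a SOAP fault with an HTTP 500 on the wire. The sketch below is illustrative only: the ValidationFault contract and the IOrderService operation are made-up names, not from the question.

using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class ValidationFault
{
    [DataMember]
    public string Message { get; set; }
}

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    [FaultContract(typeof(ValidationFault))]
    void PlaceOrder(string orderXml);
}

public class OrderService : IOrderService
{
    public void PlaceOrder(string orderXml)
    {
        if (string.IsNullOrEmpty(orderXml))
        {
            // Serialized as a SOAP fault, which goes out as HTTP 500.
            throw new FaultException<ValidationFault>(
                new ValidationFault { Message = "Order message is empty." },
                new FaultReason("Validation failed"));
        }
        // ... normal processing returns a regular response (HTTP 200).
    }
}

The alternative discussed in the question is to make the error part of the normal response contract, in which case the transport stays at HTTP 200.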

    Read the article

  • How do I give each test its own TestResults folder?

    - by izb
    I have a set of unit tests, each with a bunch of methods, each of which produces output in the TestResults folder. At the moment, all the test files are jumbled up in this folder, but I'd like to bring some order to the chaos. Ideally, I'd like to have a folder for each test method. I know I can go round adding code to each test to make it produce output in a subfolder instead, but I was wondering if there was a way to control the output folder location with the Visual Studio unit test framework, perhaps using an initialization method on each test class so that any new tests added automatically get their own output folder without needing copy/pasted boilerplate code?
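One approach that avoids the copy/pasted boilerplate, assuming the MSTest framework (Microsoft.VisualStudio.TestTools.UnitTesting): let a [TestInitialize] method build a per-test folder from the TestContext, so every new test in the class picks it up automatically. A rough sketch (the class, property and folder names are illustrative, and the results-directory property may differ slightly between framework versions):

using System.IO;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class MyTests
{
    // MSTest injects the current test's context into this property.
    public TestContext TestContext { get; set; }

    protected string TestOutputDir { get; private set; }

    [TestInitialize]
    public void CreatePerTestFolder()
    {
        // One sub-folder per test method, under the run's results directory.
        TestOutputDir = Path.Combine(TestContext.TestResultsDirectory, TestContext.TestName);
        Directory.CreateDirectory(TestOutputDir);
    }

    [TestMethod]
    public void ProducesOutputInItsOwnFolder()
    {
        File.WriteAllText(Path.Combine(TestOutputDir, "output.txt"), "hello");
    }
}

Putting the [TestInitialize] method and the TestContext property in a common base class means each test class only needs to inherit it.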

    Read the article

  • How to extend WPF hit testing zone for a Path object.

    - by user275587
WPF hit testing is pretty good, but the only method I have found to extend the hit zone is to put a transparent padding area around your object, and I can't find any way to add a transparent area around a Path object. The path is very thin, and I would like to enable hit testing if the user clicks near the path. I can't find any method to extend the path object with a transparent area like the image below. I tried to use a partially transparent stroke brush, but I ran into the problem described here: http://stackoverflow.com/questions/1412833/how-can-i-draw-a-soft-line-in-wpf-presumably-using-a-lineargradientbrush I also tried to put an adorner over my line, but because of the WPF anti-aliasing algorithms the position is way off when I zoom in on my canvas, and it interferes with other objects' hit testing in a bad way. Any suggestion to extend the hit-testing zone would be highly appreciated. Thanks, Kumar
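One workaround, sketched below, is to overlay a second Path that shares the geometry of the thin path but has a transparent stroke with a larger StrokeThickness, and to attach the mouse handlers to that overlay; Brushes.Transparent still participates in WPF hit testing, whereas a null Stroke does not. The thinPath and canvas arguments are assumed to exist already in your code:

using System.Windows.Controls;
using System.Windows.Media;
using System.Windows.Shapes;

static class HitZoneHelper
{
    public static void AddHitZone(Canvas canvas, Path thinPath)
    {
        var hitZone = new Path
        {
            Data = thinPath.Data,                              // same geometry as the visible path
            Stroke = Brushes.Transparent,                      // invisible, but still hit-testable
            StrokeThickness = thinPath.StrokeThickness + 10    // the extra click padding
        };
        hitZone.MouseLeftButtonDown += (sender, e) =>
        {
            // Treat a click near the line as a click on the thin path itself.
        };
        canvas.Children.Add(hitZone);
    }
}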

    Read the article

  • loading fixtures for django tests

    - by alexarsh
Hi, I want to use some fixtures in my tests. I have a cms_sample app with a fixtures folder inside containing cms_sample_data.xml. I use the following in my test.py:

class Funtionality(TestCase):
    fixtures = ['cms_sample_data']

I do use the TestCase from django.test and not unittest. But the fixtures are not loaded. What am I missing? Thanks, Arshavski Alexander.

    Read the article

  • Splitting a test to a set of smaller tests

    - by mkorpela
I want to be able to split a big test into smaller tests, so that when the smaller tests pass they imply that the big test would also pass (so there is no reason to run the original big test). I want to do this because smaller tests usually take less time, less effort and are less fragile. I would like to know if there are test design patterns or verification tools that can help me to achieve this test splitting in a robust way. I fear that the connection between the smaller tests and the original test is lost when someone changes something in the set of smaller tests. Another fear is that the set of smaller tests doesn't really cover the big test. An example of what I am aiming at:

//Class under test
class A {
    public void setB(B b){ this.b = b; }
    public Output process(Input i){
        return b.process(doMyProcessing(i));
    }
    private InputFromA doMyProcessing(Input i){ .. }
    ..
}

//Another class under test
class B {
    public Output process(InputFromA i){ .. }
    ..
}

//The Big Test
@Test
public void theBigTest(){
    A systemUnderTest = createSystemUnderTest(); // <-- expect that this is expensive
    Input i = createInput();
    Output o = systemUnderTest.process(i); // <-- .. or expect that this is expensive
    assertEquals(o, expectedOutput());
}

//The splitted tests
@PartlyDefines("theBigTest") // <-- so something like this should come from the tool..
@Test
public void smallerTest1(){ // this method is a bit too long but its just an example..
    Input i = createInput();
    InputFromA x = expectedInputFromA(); // this should be the same in both tests and it should be ensured somehow
    Output expected = expectedOutput(); // this should be the same in both tests and it should be ensured somehow
    B b = mock(B.class);
    when(b.process(x)).thenReturn(expected);
    A classUnderTest = createInstanceOfClassA();
    classUnderTest.setB(b);
    Output o = classUnderTest.process(i);
    assertEquals(o, expected);
    verify(b).process(x);
    verifyNoMoreInteractions(b);
}

@PartlyDefines("theBigTest") // <-- so something like this should come from the tool..
@Test
public void smallerTest2(){
    InputFromA x = expectedInputFromA(); // this should be the same in both tests and it should be ensured somehow
    Output expected = expectedOutput(); // this should be the same in both tests and it should be ensured somehow
    B classUnderTest = createInstanceOfClassB();
    Output o = classUnderTest.process(x);
    assertEquals(o, expected);
}

    Read the article

  • Resources for Test Driven Development in Web Applications?

    - by HorusKol
I would like to try and implement some TDD in our web applications to reduce regressions and improve release quality, but I'm not convinced of how well automated testing can perform with something as fluffy as web applications. I've read about and tried TDD and unit testing, but the examples are 'solid' and rather simple functionalities like currency converters, and so on. Are there any resources that can help with unit testing content management and publication systems? How about unit testing a shopping cart/store (physical and online products)? AJAX? Googling for "Web Test Driven Development" just gets me old articles from several years ago, either covering the same examples of calculator-like functions or discussions about why TDD is better than anything (without any examples).

    Read the article

  • Testing with Profiler Custom Events and Database Snapshots

    We've all had them. One of those stored procedures that is huge and contains complex business logic which may or may not be executed. These procedures make it an absolute nightmare when it comes to debugging problems because they're so complex and have so many logic offshoots that it's very easy to get lost when you're trying to determine the path that the procedure code took when it ran. Fortunately Profiler lets you define custom events that you can raise in your code and capture in a trace so you get a better window into the sub events occurring in your code. I found it very useful to use custom events and a database snapshot to debug some code recently and we'll explore both in this article. I find raising these events and running Profiler to be very useful for testing my stored procedures on my own as well as when my code is going through official testing and user acceptance. It's a simple approach and a great way to catch any performance problems or logic errors.
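For context, the custom events the article refers to are SQL Server's user-configurable trace events (event IDs 82-91), raised with the sp_trace_generateevent system procedure (ALTER TRACE permission required) and captured in Profiler under UserConfigurable:0-9. The article raises them from T-SQL inside the stored procedure; as a hedged C# sketch, the same system procedure can also be called from client code (connection string and message text are placeholders):

using System.Data;
using System.Data.SqlClient;

class CustomTraceEvent
{
    public static void Raise(string connectionString, string info)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("sp_trace_generateevent", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@eventid", 82);     // UserConfigurable:0 (IDs 82-91 are available)
            cmd.Parameters.AddWithValue("@userinfo", info);  // shows up in the Profiler trace
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}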

    Read the article
