Search Results

Search found 22065 results on 883 pages for 'performance testing'.


  • How do you handle browser cache with login/logout?

    - by Julien
    To improve performance, I'd like to add a fairly long Cache-Control (up to 30 minutes) to each page, since the pages do not change often. However, each page also displays the name of the logged-in user (like this website). The problem is when the user logs in or logs out: the user name must change. How can I change the user name after each login/logout action while keeping a long Cache-Control? Here are the solutions I can think of: (1) an Ajax request (not cached) to retrieve and display the user name; if I have two requests (/user?registered and /user?new), they could be cached as well, but I am afraid this extra request would nullify my caching gains performance-wise. (2) Add a unique URL variable (?time=) to make the URL different and bypass the cache; however, I would have to add this variable to all links on my web page, which is not very convenient code-wise. This problem becomes greater if I actually have more content that differs between registered users and new users.

    Read the article

  • Validation Rules in Webtesting using VS2010

    - by Lexipain
    I'm creating a simple webtest (recorded web performance test) that makes sure a correct error message is displayed if I try to log in with a username that does not exist. However, there are two types of error messages that handle incorrect login info. One is for all the usernames that do not exist and therefore are not allowed, and the other is for usernames that start with the letter 'Q' (which is not allowed for a few reasons). Now what I want to do is use the 'Find Text' validation rule, and the test should pass if ONE of the 'Find Text' parameters is found; in that case I want the second 'Find Text' rule to be ignored so it doesn't fail the test. In other words, the test should always pass if one of the 'Find Text' rules matches. How can I achieve that? Is there some if/else statement that I can use for this?
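
    One possible approach, sketched below on the assumption that this is a Visual Studio web performance test: a single custom validation rule that passes when either expected error message appears in the response body, replacing the two separate Find Text rules. The class name EitherTextValidationRule and its property names are hypothetical.

        using Microsoft.VisualStudio.TestTools.WebTesting;

        // Hypothetical rule: the request passes validation if either expected
        // error message is present in the response body.
        public class EitherTextValidationRule : ValidationRule
        {
            public string FirstText { get; set; }
            public string SecondText { get; set; }

            public override void Validate(object sender, ValidationEventArgs e)
            {
                string body = e.Response.BodyString ?? string.Empty;
                bool found = body.Contains(FirstText) || body.Contains(SecondText);

                e.IsValid = found;
                e.Message = found
                    ? "One of the expected error messages was found."
                    : "Neither expected error message was found.";
            }
        }

    Attaching this one rule to the login request (instead of two Find Text rules) gives the either/or behaviour without any if/else logic in the recorded test itself.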

    Read the article

  • JPA EntityManager remove operation is not performant

    - by Samuel
    When I try to do an entityManager.remove(instance), the underlying JPA provider issues a separate delete operation for each GroupUser entity. I feel this is not right from a performance perspective, since if a Group has 1000 users there will be 1001 calls issued to delete the entire group and its GroupUser entities. Would it make more sense to write a named query to remove all entries in the group_user table (e.g. delete from group_user where group_id=?), so that I would have to make just two calls to delete the group?

        @Entity
        @Table(name = "tbl_group")
        public class Group {
            @OneToMany(mappedBy = "group", cascade = CascadeType.ALL, fetch = FetchType.LAZY)
            @Cascade(value = DELETE_ORPHAN)
            private Set<GroupUser> groupUsers = new HashSet<GroupUser>(0);

    Read the article

  • Is testability alone justification for dependency injection?

    - by fearofawhackplanet
    The advantages of DI, as far as I am aware, are: reduced dependencies, more reusable code, more testable code, and more readable code. Say I have a repository, OrderRepository, which acts as a repository for an Order object generated through a LINQ to SQL dbml. I can't make my orders repository generic, as it performs mapping between the LINQ Order entity and my own Order POCO domain class. Since the OrderRepository by necessity is dependent on a specific LINQ to SQL DataContext, passing the DataContext as a parameter can't really be said to make the code reusable or reduce dependencies in any meaningful way. It also makes the code harder to read: to instantiate the repository I now need to write new OrdersRepository(new MyLinqDataContext()), which additionally is contrary to the main purpose of the repository, that being to abstract/hide the existence of the DataContext from consuming code. So in general I think this would be a pretty horrible design, but it would give the benefit of facilitating unit testing. Is this enough justification? Or is there a third way? I'd be very interested in hearing opinions.
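
    One commonly suggested middle ground ("poor man's DI"), sketched here with the hypothetical MyLinqDataContext type from the question: keep a default constructor for consuming code so the DataContext stays hidden, while tests use a second constructor to supply a context pointed at a test database or a stub.

        // MyLinqDataContext stands in for the question's LINQ to SQL context.
        public class OrdersRepository
        {
            private readonly MyLinqDataContext _context;

            // Production callers keep the simple form and never see the DataContext.
            public OrdersRepository() : this(new MyLinqDataContext()) { }

            // Unit tests inject a context of their choosing.
            public OrdersRepository(MyLinqDataContext context)
            {
                _context = context;
            }
        }

    This keeps the readability of new OrdersRepository() while still allowing the dependency to be swapped for testing; whether that is "enough" DI is exactly the judgment call the question raises.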

    Read the article

  • C# Memoization of functions with arbitrary number of arguments

    - by Lirik
    I'm trying to create a memoization interface for functions with an arbitrary number of arguments, but I'm failing miserably. The first thing I tried is to define an interface for a function which gets memoized automatically upon execution:

        class EMAFunction : IFunction
        {
            Dictionary<List<object>, List<object>> map;

            class EMAComparer : IEqualityComparer<List<object>>
            {
                private int _multiplier = 97;

                public bool Equals(List<object> a, List<object> b)
                {
                    List<object> aVals = (List<object>)a[0];
                    int aPeriod = (int)a[1];
                    List<object> bVals = (List<object>)b[0];
                    int bPeriod = (int)b[1];
                    return (aVals.Count == bVals.Count) && (aPeriod == bPeriod);
                }

                public int GetHashCode(List<object> obj)
                {
                    // Don't compute hash code on null object.
                    if (obj == null) { return 0; }
                    // Get length.
                    int length = obj.Count;
                    List<object> vals = (List<object>)obj[0];
                    int period = (int)obj[1];
                    return (_multiplier * vals.GetHashCode() * period.GetHashCode()) + length;
                }
            }

            public EMAFunction()
            {
                NumParams = 2;
                Name = "EMA";
                map = new Dictionary<List<object>, List<object>>(new EMAComparer());
            }

            #region IFunction Members

            public int NumParams { get; set; }
            public string Name { get; set; }

            public object Execute(List<object> parameters)
            {
                if (parameters.Count != NumParams)
                    throw new ArgumentException("The num params doesn't match!");

                if (!map.ContainsKey(parameters))
                {
                    List<double> values = new List<double>();
                    List<object> asObj = (List<object>)parameters[0];
                    foreach (object val in asObj)
                    {
                        values.Add((double)val);
                    }
                    int period = (int)parameters[1];
                    asObj.Clear();
                    List<double> ema = TechFunctions.ExponentialMovingAverage(values, period);
                    foreach (double val in ema)
                    {
                        asObj.Add(val);
                    }
                    map.Add(parameters, asObj);
                }
                return map[parameters];
            }

            public void ClearMap()
            {
                map.Clear();
            }

            #endregion
        }

    Here are my tests of the function:

        private void MemoizeTest()
        {
            DataSet dataSet = DataLoader.LoadData(DataLoader.DataSource.FROM_WEB, 1024);
            List<String> labels = dataSet.DataLabels;
            Stopwatch sw = new Stopwatch();
            IFunction emaFunc = new EMAFunction();
            List<object> parameters = new List<object>();
            int numRuns = 1000;
            long sumTicks = 0;
            parameters.Add(dataSet.GetValues("open"));
            parameters.Add(12);

            // First call
            for (int i = 0; i < numRuns; ++i)
            {
                emaFunc.ClearMap(); // remove any memoization mappings
                sw.Start();
                emaFunc.Execute(parameters);
                sw.Stop();
                sumTicks += sw.ElapsedTicks;
            }
            Console.WriteLine("Average ticks not-memoized " + (sumTicks / numRuns));

            sumTicks = 0;
            // Repeat call
            for (int i = 0; i < numRuns; ++i)
            {
                sw.Start();
                emaFunc.Execute(parameters);
                sw.Stop();
                sumTicks += sw.ElapsedTicks;
            }
            Console.WriteLine("Average ticks memoized " + (sumTicks / numRuns));
        }

    The performance is confusing me... I expected the memoized function to be faster, but it didn't work out that way:

        Average ticks not-memoized 106,182
        Average ticks memoized 198,854

    I tried doubling the data instances to 2048, but the results were about the same:

        Average ticks not-memoized 232,579
        Average ticks memoized 446,280

    I did notice that it was correctly finding the parameters in the map and going directly to it, but the performance was still slow... I'm open either to troubleshooting help with this example, or, if you have a better solution to the problem, please let me know what it is.
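
    A minimal alternative sketch (not the poster's code, and assuming a compiler with value-tuple support): memoize a strongly typed two-argument function behind a cache whose key has cheap, structural equality, instead of keying on List<object>. The Memoizer class is hypothetical; TechFunctions.ExponentialMovingAverage is assumed from the question.

        using System;
        using System.Collections.Generic;

        public static class Memoizer
        {
            // Wraps a two-argument function with a cache keyed on a value tuple,
            // so hashing and equality are inexpensive and well defined.
            public static Func<TA, TB, TResult> Memoize<TA, TB, TResult>(Func<TA, TB, TResult> f)
            {
                var cache = new Dictionary<(TA, TB), TResult>();
                return (a, b) =>
                {
                    var key = (a, b);
                    TResult result;
                    if (!cache.TryGetValue(key, out result))
                    {
                        result = f(a, b);
                        cache[key] = result;
                    }
                    return result;
                };
            }
        }

    Usage might look like var ema = Memoizer.Memoize<List<double>, int, List<double>>(TechFunctions.ExponentialMovingAverage); note that the List<double> argument is compared by reference here, so two distinct lists with equal contents would be cached separately.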

    Read the article

  • How to determine the CPU and RAM needed for a Rails app?

    - by Ben
    What is the most accurate way to determine the amount of CPU speed and RAM needed to run my Rails app? I believe there are stress-testing tools like Tsung, but how do I determine, for example, that I need X more RAM or X more CPU? I would like to find some way to roughly gauge the performance needs of my application so I can anticipate future needs. I think this data will also be useful for deciding whether to upgrade one machine, or to get another dedicated machine and put all the databases on that one. Essentially, I am concerned about scaling issues and how to anticipate them. Thanks in advance for the help!

    Read the article

  • Clojure / HBase: How to Import HBaseTestingUtility in v0.94.6.1

    - by David Williams
    In Clojure, if I want to start a test cluster using the HBase testing utility, I have to annotate my dependencies with:

        [org.apache.hbase/hbase "0.92.2" :classifier "tests" :scope "test"]

    First of all, I have no idea what this means. According to Leiningen's sample project.clj:

        ;; Dependencies are listed as [group-id/name version]; in addition
        ;; to keywords supported by Pomegranate, you can use :native-prefix
        ;; to specify a prefix. This prefix is used to extract natives in
        ;; jars that don't adhere to the default "<os>/<arch>/" layout that
        ;; Leiningen expects.

    Question 1: What does that mean? Question 2: If I upgrade the version:

        [org.apache.hbase/hbase "0.94.6.1" :classifier "tests" :scope "test"]

    then I receive a ClassNotFoundException:

        Exception in thread "main" java.lang.ClassNotFoundException: org.apache.hadoop.hbase.HBaseConfiguration

    What's going on here and how do I fix it?

    Read the article

  • How can I detect a debugger or other tool that might be analysing my software?

    - by Workshop Alex
    A very simple situation. I'm working on an application in Delphi 2007 which is often compiled as 'Release' but still runs under a debugger. Occasionally it will run under SilkTest too, for regression testing. While this is quite fun, I want to do something special... I want to detect whether my application is running within a debugger/regression-tester and, if that's the case, I want the application to know which tool is being used! (Thus, when the application crashes, I could report this information in its error report.) Any suggestions or solutions?

    Read the article

  • How to cache and store objects and set an expiry policy in Android?

    - by virsir
    I have an app that fetches data from the internet, and for better performance and bandwidth I need to implement a cache layer. There are two different kinds of data coming from the internet: one changes every hour and the other basically does not change. So for the first kind of data, I need to implement an expiry policy that deletes it one hour after it was created, and when the user requests that data I will check the storage first and only go to the internet if nothing is found. I thought about using SharedPreferences or an SQLite database to store the JSON data or a serialized object string. My questions are: 1) What should I use, SharedPreferences, an SQLite database, or something else? A piece of data is not big, but there may be many instances of that data. 2) How do I implement that expiry system?

    Read the article

  • Online product demo environment for Windows applications

    - by Stefanos Tses
    I'm looking for a way to allow potential customers to try my application before they buy it. The product is a Windows Forms application that requires an SQL Server database to operate. Although I have a functional demo that the customer can install on their network, I want to make it easier for them by having them "play" with it in my environment. I remember Microsoft had (has?) something similar: I was testing Visual Studio a few years ago in a virtual environment where I was connecting to a server at Microsoft. Any suggestions? Thanks.

    Read the article

  • Browser caching issue on an HTTPS site when pressing F5

    - by sushil bharwani
    I am working on a website where I have a content entry form. This form contains a TinyMCE control. The control is composed of some 40-50 files. Testing reported that the entry form loads slowly and every time shows some 50 files loading to completely load the page. Is there a way I can decrease this time? I have taken advantage of browser caching by setting the Expires header of static content to a very far-off date. When I access the form through its link a second or later time, it loads fast without showing 40 files remaining, but when I press F5 it reloads the entire page. I'm confused as to how F5 is different from clicking on the link. Just to add, my URL is HTTPS. Any suggestion to improve the performance of this form would be great.

    Read the article

  • How to efficiently SELECT rows from database table based on selected set of values

    - by Chau Chee Yang
    I have a transaction table of 1 million rows. The table has a field named "Code" to keep the customer's ID. There are about 10,000 different customer codes. I have a GUI that allows the user to render a report from the transaction table. The user may select an arbitrary number of customers for rendering. I used the IN operator first and it works for a few customers:

        SELECT * FROM TRANS_TABLE WHERE CODE IN ('...', '...', '...')

    I quickly run into problems if I select a few thousand customers; there is a limitation on the IN operator. An alternative is to create a temporary table with only one field, CODE, and inject the selected customer codes into it using INSERT statements. I can then use:

        SELECT A.* FROM TRANS_TABLE A INNER JOIN TEMP B ON (A.CODE=B.CODE)

    This works nicely for huge selections. However, there is performance overhead for the temporary table creation, the INSERT injection and the dropping of the temporary table. Are you aware of a better solution to handle this situation?

    Read the article

  • Would watching a file for changes or redundantly querying that file be more efficient?

    - by badpanda
    I am wondering whether watching a file/directory for changes using the FileSystemWatcher class is extremely memory intensive. I am developing a desktop application in C# that will be running behind the scenes continuously on low-performance computers, and I need some way of checking to see if various files have changed. I can think of a few solutions: (1) watch the directories using FileSystemWatcher; (2) run a timed thread on an interval that goes through and manually checks this; (3) check manually every time the action-handler thread runs (the program will occasionally do something, on an action). Any suggestions? Thanks! badPanda
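
    A minimal sketch of option (1), with a hypothetical directory and file pattern; FileSystemWatcher relies on operating-system change notifications rather than polling, so it is generally light on CPU and memory, though its internal buffer can overflow under very heavy file churn.

        using System;
        using System.IO;

        class WatcherExample
        {
            static void Main()
            {
                // Hypothetical directory and pattern; adjust to the files being monitored.
                var watcher = new FileSystemWatcher(@"C:\MyApp\Data", "*.xml")
                {
                    NotifyFilter = NotifyFilters.LastWrite | NotifyFilters.FileName,
                    IncludeSubdirectories = false
                };

                watcher.Changed += (s, e) => Console.WriteLine("Changed: " + e.FullPath);
                watcher.Created += (s, e) => Console.WriteLine("Created: " + e.FullPath);
                watcher.Renamed += (s, e) => Console.WriteLine("Renamed: " + e.FullPath);

                watcher.EnableRaisingEvents = true; // start listening

                Console.WriteLine("Watching. Press Enter to exit.");
                Console.ReadLine();
                watcher.Dispose();
            }
        }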

    Read the article

  • Page zoom slows down page rendering

    - by Alex
    I'm building a map viewer much like Google Maps and I've run into an interesting performance problem when a page is zoomed (i.e. Ctrl + or Ctrl -). It seems to affect all major browsers, but Firefox has the worst problems as far as I can tell. The problem is that when the page is zoomed, panning by dragging the mouse seems really sluggish. This can even be seen on Google Maps: pan the map left and right and note how smooth it is. Now press Ctrl + three or four times, then pan the map left and right in the same way. Notice the difference? Does anyone know how I can minimize this problem?

    Read the article

  • Generic test suite for ASP.NET Membership/Role/Profile/Session Providers

    - by SztupY
    Hi! I've just created custom ASP.NET Membership, Role, Profile and Session State providers, and I was wondering whether there exists a test suite or something similar to test the implementation of the providers. I've checked some of the open-source providers I could find (like the NauckIt.PostgreSQL provider), but none of them contained unit tests, and all of the forum topics I've found mentioned only a few test cases (like checking whether creating a user works), which is clearly not a complete test suite for a Membership provider. (And I couldn't find anything for the other three providers.) Are there more or less complete test suites for the above-mentioned providers, or are there custom providers out there that have at least some testing available?
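
    As a starting point, a minimal NUnit smoke-test sketch that exercises a custom provider directly might look like the following; MyMembershipProvider and its configuration values are hypothetical placeholders for the provider under test, and a real suite would repeat the pattern for the Role, Profile and Session State providers.

        using System.Collections.Specialized;
        using System.Web.Security;
        using NUnit.Framework;

        [TestFixture]
        public class MembershipProviderSmokeTests
        {
            private MembershipProvider _provider;

            [SetUp]
            public void SetUp()
            {
                // MyMembershipProvider and the settings below are hypothetical.
                _provider = new MyMembershipProvider();
                _provider.Initialize("TestProvider", new NameValueCollection
                {
                    { "connectionStringName", "TestDb" },
                    { "applicationName", "/" }
                });
            }

            [Test]
            public void CreateUser_ThenValidateUser_Succeeds()
            {
                MembershipCreateStatus status;
                _provider.CreateUser("alice", "P@ssw0rd!", "alice@example.com",
                                     null, null, true, null, out status);

                Assert.AreEqual(MembershipCreateStatus.Success, status);
                Assert.IsTrue(_provider.ValidateUser("alice", "P@ssw0rd!"));

                _provider.DeleteUser("alice", true);
            }
        }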

    Read the article

  • How do we name test methods where we are checking for more than one condition?

    - by Sandbox
    I follow the technique specified in Roy Osherove's The Art of Unit Testing book when naming test methods - MethodName_Scenario_Expectation. It suits my 'unit' tests perfectly well. But for tests that I write for a 'controller' or 'coordinator' class, there isn't necessarily a method which I want to test. For these tests, I generate multiple conditions which make up one scenario and then I verify the expectation. For example, I may set some properties on different instances, generate an event and then verify that my expectations of the controller/coordinator are being met. Now, my controller handles events using a private event handler. Here my scenario is that I set some properties, say three: condition1, condition2 and condition3. My scenario also includes an event being raised. I don't have a method name, as my event handler is private. How do I name such a test method?
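
    One hedged convention (not from the book) is to let the event or scenario subject take the method-name slot, keeping the Scenario_Expectation tail; the class and names below are purely illustrative.

        using NUnit.Framework;

        [TestFixture]
        public class OrderCoordinatorTests
        {
            // Subject_Scenario_Expectation: the "method" slot names the behaviour
            // (the event being handled) rather than the private handler itself.
            [Test]
            public void StatusChangedEvent_WithCondition1And2And3Set_MeetsCoordinatorExpectation()
            {
                // Arrange: set condition1, condition2 and condition3 on the instances.
                // Act: raise the event that the private handler subscribes to.
                // Assert: verify the coordinator's expected outcome.
                Assert.Pass("Illustrative placeholder only.");
            }
        }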

    Read the article

  • How to add weather info that is evaluated only once?

    - by Savvas Sopiadis
    Hi everybody! In an ASP.NET MVC (1.0) project I managed to get weather info from an RSS feed and to show it. The problem I have is performance: I have put a RenderAction() method in the Site.Master file (which works perfectly), but I'm worried about how it will behave if a user clicks on menu item 1, after some seconds on menu item 2, after some seconds on menu item 3, ... thus making the RSS feed request new info again and again and again! Can this somehow be avoided? (Can this info somehow be loaded only once?) Thanks in advance!
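
    A minimal sketch of one way to avoid re-fetching on every page, assuming a hypothetical WeatherService that wraps the project's existing RSS-parsing code: cache the parsed result in HttpRuntime.Cache with an absolute expiration, so the RenderAction() call reuses it until it expires.

        using System;
        using System.Web;
        using System.Web.Caching;

        // WeatherInfo and FetchFromRss() are hypothetical placeholders for the
        // project's existing feed parsing; only the caching pattern matters here.
        public static class WeatherService
        {
            private const string CacheKey = "weather-info";

            public static WeatherInfo GetCurrent()
            {
                var cached = HttpRuntime.Cache[CacheKey] as WeatherInfo;
                if (cached != null)
                    return cached;

                WeatherInfo fresh = FetchFromRss(); // the existing RSS call
                HttpRuntime.Cache.Insert(
                    CacheKey, fresh, null,
                    DateTime.UtcNow.AddMinutes(30), // refresh at most twice an hour
                    Cache.NoSlidingExpiration);
                return fresh;
            }

            private static WeatherInfo FetchFromRss()
            {
                // ... parse the feed here ...
                return new WeatherInfo();
            }
        }

        public class WeatherInfo { /* temperature, description, etc. */ }

    The controller action behind RenderAction() then just returns WeatherService.GetCurrent() to its partial view, so clicking through the menu repeatedly hits the cache rather than the feed.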

    Read the article

  • Should I trust Redis for data integrity?

    - by Jiaji
    In my current project, I have PostgreSQL as my master DB and Redis as a kind of slave; e.g., when one user adds another as a friend, the relationship is first stored in PostgreSQL and then a friend list in Redis is updated. When some user's friend list is requested, it is pulled out of Redis instead of PostgreSQL. The question is: when I update the friend list in Redis, should I get a fresh copy out of PostgreSQL and replace the old list in Redis with the new one, or should I keep the old list and simply SADD the user id into it? The latter is of course better for performance, but intuitively the former does a better job of keeping the data integrity. And if something like Celery is used, is the second method worth the risk?

    Read the article

  • Setting up functional Tests in Flex

    - by Dan Monego
    I'm setting up a functional test suite for an application that loads an external configuration file. Right now, I'm using flexunit's addAsync function to load it and then again to test if the contents point to services that exist and can be accessed. The trouble with this is that having this kind of two (or more) stage method means that I'm running all of my tests in the context of one test with dozens of asserts, which seems like a kind of degenerate way to use the framework, and makes bugs harder to find. Is there a way to have something like an asynchronous setup? Is there another testing framework that handles this better?

    Read the article

  • How to simulate a mouse click in Cocoa for the iPhone?

    - by eagle
    I'm trying to set up automated unit tests for an iPhone application. I'm using a UIWebBrowser and need to simulate clicks on different links. I've tried doing this with JavaScript, but it doesn't produce the same results as when I manually click on the links. The main problem is with links that have their target property set. I believe the only way for this automated unit test to work correctly is to simulate a mouse click at a specific x/y coordinate (i.e. where the link is located). Since the unit testing will only be used internally, private API calls are fine. It seems like this should be possible, since the iPhone app iSimulate seems to do something similar. Is there any way to do this in the framework?

    Read the article

  • C# / Visual Studio: production and test code placement

    - by Patrick Linskey
    Hi, In JavaLand, I'm used to creating projects that contain both production and test code. I like this practice because it simplifies testing of internal code without artificially exposing the internals in a project's published API. So far, in my experience with C# / Visual Studio / ReSharper / NUnit, I've created separate projects (i.e., separate DLLs) for production and test code. Is this the idiom, or am I off base? If this is idiomatically correct, what's the right way to deal with exposing classes and methods for test purposes? Thanks, -Patrick
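
    A common way to handle the "exposing internals" part while keeping separate DLLs, shown as a sketch with hypothetical assembly names: mark the production assembly's internals as visible to the test assembly. If the test assembly is strong-named, the attribute must also include its public key.

        // In the production project's AssemblyInfo.cs (assembly names are hypothetical).
        using System.Runtime.CompilerServices;

        [assembly: InternalsVisibleTo("MyProduct.Tests")]

        namespace MyProduct
        {
            // Internal types like this one are now testable from MyProduct.Tests
            // without being part of the published API.
            internal class OrderNumberGenerator
            {
                internal int Next(int previous)
                {
                    return previous + 1;
                }
            }
        }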

    Read the article

  • Setting up a web developer lab for learning purposes

    - by Saleh Al-Abbas
    I'm not a developer by profession, so I'm not exposed to the real-world technical problems that professional developers face. I have read/heard about web farms, integration between different systems, load balancing, etc. I was therefore wondering whether there are ways for an individual developer to create an environment that simulates real-world situations with a minimal number of machines, covering things like: web farms and caching; simulating many users accessing your website (pressure/stress tests?); performance; load balancing; and anything else you think I should consider. By the way, I have a server machine and one PC, and I don't mind investing in tools and software. P.S. I'm using Microsoft technologies for development, but I hope this is not a limiting factor. Thanks

    Read the article

  • How to populate an array with recordset data

    - by Curtis Inderwiesche
    I am attempting to move data from a recordset directly into an array. I know this is possible, but specifically I want to do this in VBA, as this is being done in MS Access 2003. Typically I would do something like the following to achieve this:

        Dim vaData As Variant
        Dim rst As ADODB.Recordset

        ' Pull data into recordset code here...

        ' Populate the array with the whole recordset.
        vaData = rst.GetRows

    What differences exist between VB and VBA that make this type of operation not work? What about performance concerns? Is this an "expensive" operation?

    Read the article

  • Windows Azure Table Storage LINQ Operators

    - by Ryan Elkins
    Currently Table Storage supports From, Where, Take, and First. Are there plans to support any of the other 29 operators? If we have to code these ourselves, how much of a performance difference are we looking at compared to something similar in SQL on SQL Server? Do you see it being somewhat comparable, or will it be far, far slower if I need to do a Count, Sum or Group By over a gigantic dataset? I like the Azure platform and the idea of cloud-based storage. I like Windows Azure for the amount of data it can store and the schema-less nature of Table Storage. SQL Azure just won't work due to the high cost of storage space.
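
    A rough sketch of what client-side aggregation looks like with the StorageClient-era TableServiceContext API (the Order entity, table name and partition key are hypothetical): Where and Take run in the service, but Count and Sum only run after the matching entities have been downloaded, which is where the cost over a gigantic dataset comes from.

        using System;
        using System.Linq;
        using Microsoft.WindowsAzure;
        using Microsoft.WindowsAzure.StorageClient;

        // Hypothetical entity for an "Orders" table.
        public class Order : TableServiceEntity
        {
            public double Amount { get; set; }
        }

        class Program
        {
            static void Main()
            {
                var account = CloudStorageAccount.DevelopmentStorageAccount;
                TableServiceContext ctx = account.CreateCloudTableClient().GetDataServiceContext();

                // The Where clause is sent to the service; everything after
                // ToList() is LINQ to Objects running locally.
                var january = ctx.CreateQuery<Order>("Orders")
                                 .Where(o => o.PartitionKey == "2010-01")
                                 .ToList();

                int count = january.Count;
                double total = january.Sum(o => o.Amount);
                Console.WriteLine("{0} orders totalling {1}", count, total);
            }
        }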

    Read the article

  • Must .aspx files have a page directive?

    - by Keith Bloom
    Around 90% of the pages on our websites have no .NET code embedded in them, yet they are published as .aspx files. I want these to render as fast as possible, so I'm removing as much as I can. Does the .NET page directive have an impact on performance? I am thinking about two factors: the page speed for each GET, and what happens when the file changes. The CMS system re-creates each page daily, and I'm wondering whether this triggers the ASP.NET compilation process.

    Read the article
