Search Results

Search found 48586 results on 1944 pages for 'page performance'.


  • Good way to capture/replay sessions from Apache Log?

    - by Mark Harrison
    For performance testing, I would like to capture some traffic from a production server and use it as a basis to replay the requests against a test server, in order to simulate a realistic load in our development environment. These are all stateless queries, so there are no issues regarding cookies, sessions, etc. The Apache log timestamps everything down to a 1-second resolution, but that's not fine-grained enough for our peak times. What's the best way to capture finer-grained timestamps for replay? And is there some ab-like load-generating program that can use this data to replicate the load?
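
    One possible approach, sketched below: newer Apache releases (2.4, and 2.2.30+) can log sub-second timestamps via the %{usec_frac}t token in mod_log_config, so a custom LogFormat could emit lines like "<epoch_micros><TAB><request_path>". Assuming such a log (the format and the test-host argument are illustrative assumptions, not stock Apache output), a small Java replayer can preserve the recorded inter-arrival gaps:

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.net.HttpURLConnection;
        import java.net.URL;

        public class LogReplayer {
            public static void main(String[] args) throws Exception {
                String logFile = args[0];
                String targetHost = args[1]; // e.g. "http://test-server:8080"
                try (BufferedReader reader = new BufferedReader(new FileReader(logFile))) {
                    String line;
                    long prevMicros = -1;
                    while ((line = reader.readLine()) != null) {
                        String[] parts = line.split("\t", 2); // [epoch_micros, request_path]
                        long micros = Long.parseLong(parts[0]);
                        if (prevMicros >= 0) {
                            long gapMillis = (micros - prevMicros) / 1000;
                            if (gapMillis > 0) Thread.sleep(gapMillis); // preserve inter-arrival gap
                        }
                        prevMicros = micros;
                        HttpURLConnection conn =
                                (HttpURLConnection) new URL(targetHost + parts[1]).openConnection();
                        conn.getResponseCode(); // fire the request; the body is discarded
                        conn.disconnect();
                    }
                }
            }
        }

    For higher request rates the replay loop would need a thread pool, since each request here blocks before the next timestamp is honored.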

    Read the article

  • What happens before applicationDidFinishLaunching is invoked?

    - by nefsu
    I'm doing performance testing on my iPhone app and I'm noticing that sometimes a good 3-4 seconds elapse at startup before I start seeing my NSLogs from applicationDidFinishLaunching. I've optimized what happens once the code enters applicationDidFinishLaunching, but I'm not sure how to optimize what goes on before that. I'm using a Default.png splash screen, so it basically just stalls on that screen before it enters applicationDidFinishLaunching and starts doing something. Just to give you some context: I have no nib files and I'm using Core Animation, if that makes any difference. I have about 10 different controllers and my total bundle size is just under 2 MB.

    Read the article

  • Java 'Prototype' pattern - new vs clone vs class.newInstance

    - by Guillaume
    In my project there are some 'Prototype' factories that create instances by cloning a final private instance. The author of those factories says that this pattern provides better performance than calling the 'new' operator. Googling for clues about this, I've found nothing really relevant. Here is a small excerpt found in the Javadoc of an unknown project: "Sadly, clone() is rather slower than calling new. However it is a lot faster than calling java.lang.Class.newInstance(), and somewhat faster than rolling our own 'cloner' method." To me it looks like an old best practice from the Java 1.1 era. Does someone know more about this? Is it still good practice on a 'modern' JVM?
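
    As a quick sanity check of that claim on a modern JVM, a naive micro-benchmark along these lines can be run (a rough sketch only: the JIT may eliminate unused allocations and skew simple loops, so use a harness like JMH for numbers you intend to trust):

        import java.lang.reflect.Constructor;

        public class PrototypeBench implements Cloneable {
            private int value = 42;

            @Override
            public PrototypeBench clone() {
                try {
                    return (PrototypeBench) super.clone();
                } catch (CloneNotSupportedException e) {
                    throw new AssertionError(e);
                }
            }

            public static void main(String[] args) throws Exception {
                final int n = 10_000_000;
                PrototypeBench prototype = new PrototypeBench();
                // Hoist the Constructor lookup so reflection isn't charged for it per call.
                Constructor<PrototypeBench> ctor = PrototypeBench.class.getDeclaredConstructor();

                long t0 = System.nanoTime();
                for (int i = 0; i < n; i++) { new PrototypeBench(); }
                long tNew = System.nanoTime() - t0;

                t0 = System.nanoTime();
                for (int i = 0; i < n; i++) { prototype.clone(); }
                long tClone = System.nanoTime() - t0;

                t0 = System.nanoTime();
                for (int i = 0; i < n; i++) { ctor.newInstance(); }
                long tReflect = System.nanoTime() - t0;

                System.out.printf("new: %d ms, clone: %d ms, reflective: %d ms%n",
                        tNew / 1_000_000, tClone / 1_000_000, tReflect / 1_000_000);
            }
        }

    Reflective instantiation has been heavily optimized on recent JVMs, so the Java 1.1-era ranking quoted above may no longer hold.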

    Read the article

  • Optimal memory layout for read-only and read/write memory segments

    - by aaa
    Hello. Suppose I have two memory segments (of equal size, approximately 1 KB each), one read-only (after initialization) and the other read/write. What is the best layout in memory for such segments in terms of memory performance: one allocation with contiguous segments, or two allocations (in general, not contiguous)? My primary architecture is Linux on 64-bit Intel. My feeling is that the former (cache-friendlier) case is better. Are there circumstances where the second layout is preferred? Thanks

    Read the article

  • How do I get Java to use my multi-core processor?

    - by Rudiger
    I'm using a GZIPInputStream in my program, and I know that the performance would be helped if I could get Java to run my program in parallel. In general, is there a command-line option for the standard VM to run on many cores? It's running on just one as it is. Thanks!

    Edit: I'm running plain ol' Java SE 6 update 17 on Windows XP. Would putting the GZIPInputStream on a separate thread explicitly help? "No! Do not put the GZIPInputStream on a separate thread! Do NOT multithread I/O!"

    Edit 2: I suppose I/O is the bottleneck, as I'm reading and writing to the same disk... In general, though, is there a way to make GZIPInputStream faster? Or a replacement for GZIPInputStream that runs in parallel?

    Edit 3: Code snippet I used:

        GZIPInputStream gzip = new GZIPInputStream(new FileInputStream(INPUT_FILENAME));
        DataInputStream in = new DataInputStream(new BufferedInputStream(gzip));
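
    On the "make GZIPInputStream faster" question, one low-risk tweak worth trying (a sketch under the assumption that buffer sizes, not raw disk throughput, are part of the cost) is simply to enlarge the buffers: GZIPInputStream's internal inflate buffer defaults to only 512 bytes. The file handling below is illustrative, and try-with-resources needs Java 7+ (on Java 6, close the streams in a finally block instead):

        import java.io.*;
        import java.util.zip.GZIPInputStream;

        public class BufferedGunzip {
            public static void main(String[] args) throws IOException {
                try (InputStream file = new BufferedInputStream(
                             new FileInputStream(args[0]), 1 << 16);            // 64 KB file buffer
                     GZIPInputStream gzip = new GZIPInputStream(file, 1 << 16); // 64 KB inflate buffer
                     DataInputStream in = new DataInputStream(
                             new BufferedInputStream(gzip, 1 << 16))) {
                    byte[] buf = new byte[8192];
                    long total = 0;
                    int n;
                    while ((n = in.read(buf)) != -1) {
                        total += n; // consume the decompressed stream
                    }
                    System.out.println("Decompressed " + total + " bytes");
                }
            }
        }

    A single gzip stream must be inflated sequentially, so threads cannot split the decompression itself; overlapping decompression with the downstream processing on a second thread is the usual parallel option.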

    Read the article

  • Best solution for reporting database

    - by zzyzx
    Here is the situation: there is a transaction-intensive database, used for both routine transactions and reports. I was wondering if I could isolate these two workloads into two independent databases, so reports could run off one database and all the transactions could occur in the other. This would improve performance for the OLTP SQL database. I have gone over a few options, like mirroring, log shipping, replication, snapshots, and clustering, but would like to discuss the best possible strategy for the desired result. Please advise on the best solution to implement this strategy, or share any other thoughts/suggestions you may have.

    Read the article

  • Smartest way to draw 100+ images on screen

    - by Henrik
    Hello all, First of all, I'm a newbie when it comes to Android programming, so if this question is totally stupid please delete it ASAP :-). Question: I'm going to draw a grid of 10x10 PNG images. Each image is 32x32 px, and all of the images are unique. I'm thinking the easiest way seems to be to put each image in an ImageView. If I add all the ImageViews to the layout, would this give me some kind of performance hit? Would there be any smarter way to draw these images? Thanks. / Henrik
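
    A smarter alternative to 100 ImageViews is a single custom View that draws all the tiles in one onDraw() pass, avoiding 100 view objects and their layout/measure overhead. A minimal sketch, assuming the caller decodes the 100 PNGs into a Bitmap array up front (class and field names are illustrative):

        import android.content.Context;
        import android.graphics.Bitmap;
        import android.graphics.Canvas;
        import android.view.View;

        public class TileGridView extends View {
            private static final int COLS = 10, ROWS = 10, TILE = 32;
            private final Bitmap[] tiles; // 100 unique 32x32 bitmaps, decoded by the caller

            public TileGridView(Context context, Bitmap[] tiles) {
                super(context);
                this.tiles = tiles;
            }

            @Override
            protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
                setMeasuredDimension(COLS * TILE, ROWS * TILE);
            }

            @Override
            protected void onDraw(Canvas canvas) {
                super.onDraw(canvas);
                for (int row = 0; row < ROWS; row++) {
                    for (int col = 0; col < COLS; col++) {
                        // Blit each tile at its grid position; no per-tile view is needed.
                        canvas.drawBitmap(tiles[row * COLS + col], col * TILE, row * TILE, null);
                    }
                }
            }
        }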

    Read the article

  • Open source / commercial alternative for jiffy.js?

    - by Marcel
    Hi, we'd like to measure "client-side web site performance", i.e. we would like answers to the following questions:

    - How long did it take to load and render a webpage (including all graphics, scripts, etc.)?
    - Can this be bundled with additional information like user-agent, operating system, etc.?
    - A graphical tool to analyze the data would be perfect.

    I know jiffy ( http://code.google.com/p/jiffy-web/wiki/Jiffy_js ) but that isn't maintained anymore. I would prefer either a hosted solution (like Google Analytics) or a Java-based solution that we deploy ourselves. Do you know of something like that? Thanks, Marc

    Read the article

  • JPA EntityManager remove operation is not performant

    - by Samuel
    When I try to do an entityManager.remove(instance), the underlying JPA provider issues a separate delete operation for each GroupUser entity. I feel this is not right from a performance perspective, since if a Group has 1000 users, 1001 calls will be issued to delete the entire group and its GroupUser entities. Would it make more sense to write a named query to remove all entries in the group_user table (e.g. delete from group_user where group_id=?), so I would have to make just 2 calls to delete the group?

        @Entity
        @Table(name = "tbl_group")
        public class Group {
            @OneToMany(mappedBy = "group", cascade = CascadeType.ALL, fetch = FetchType.LAZY)
            @Cascade(value = DELETE_ORPHAN)
            private Set<GroupUser> groupUsers = new HashSet<GroupUser>(0);
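
    The named-query idea from the question does collapse the per-row deletes into one statement. A hedged sketch of that approach (entity and property names follow the mapping above; note that a bulk JPQL DELETE bypasses cascade settings and the persistence context, so any already-loaded GroupUser instances become stale):

        public void deleteGroup(EntityManager em, Long groupId) {
            em.getTransaction().begin();
            // One statement removes every membership row for the group.
            em.createQuery("DELETE FROM GroupUser gu WHERE gu.group.id = :id")
              .setParameter("id", groupId)
              .executeUpdate();
            // One more statement removes the group row itself.
            Group group = em.find(Group.class, groupId);
            if (group != null) {
                em.remove(group);
            }
            em.getTransaction().commit();
        }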

    Read the article

  • C# Memoization of functions with arbitrary number of arguments

    - by Lirik
    I'm trying to create a memoization interface for functions with an arbitrary number of arguments, but I'm failing miserably. The first thing I tried is to define an interface for a function which gets memoized automatically upon execution:

        class EMAFunction : IFunction
        {
            Dictionary<List<object>, List<object>> map;

            class EMAComparer : IEqualityComparer<List<object>>
            {
                private int _multiplier = 97;

                public bool Equals(List<object> a, List<object> b)
                {
                    List<object> aVals = (List<object>)a[0];
                    int aPeriod = (int)a[1];
                    List<object> bVals = (List<object>)b[0];
                    int bPeriod = (int)b[1];
                    return (aVals.Count == bVals.Count) && (aPeriod == bPeriod);
                }

                public int GetHashCode(List<object> obj)
                {
                    // Don't compute hash code on null object.
                    if (obj == null)
                    {
                        return 0;
                    }

                    // Get length.
                    int length = obj.Count;
                    List<object> vals = (List<object>)obj[0];
                    int period = (int)obj[1];
                    return (_multiplier * vals.GetHashCode() * period.GetHashCode()) + length;
                }
            }

            public EMAFunction()
            {
                NumParams = 2;
                Name = "EMA";
                map = new Dictionary<List<object>, List<object>>(new EMAComparer());
            }

            #region IFunction Members

            public int NumParams { get; set; }
            public string Name { get; set; }

            public object Execute(List<object> parameters)
            {
                if (parameters.Count != NumParams)
                    throw new ArgumentException("The num params doesn't match!");

                if (!map.ContainsKey(parameters))
                {
                    //map.Add(parameters,
                    List<double> values = new List<double>();
                    List<object> asObj = (List<object>)parameters[0];
                    foreach (object val in asObj)
                    {
                        values.Add((double)val);
                    }
                    int period = (int)parameters[1];
                    asObj.Clear();
                    List<double> ema = TechFunctions.ExponentialMovingAverage(values, period);
                    foreach (double val in ema)
                    {
                        asObj.Add(val);
                    }
                    map.Add(parameters, asObj);
                }
                return map[parameters];
            }

            public void ClearMap()
            {
                map.Clear();
            }

            #endregion
        }

    Here are my tests of the function:

        private void MemoizeTest()
        {
            DataSet dataSet = DataLoader.LoadData(DataLoader.DataSource.FROM_WEB, 1024);
            List<String> labels = dataSet.DataLabels;
            Stopwatch sw = new Stopwatch();
            IFunction emaFunc = new EMAFunction();

            List<object> parameters = new List<object>();
            int numRuns = 1000;
            long sumTicks = 0;
            parameters.Add(dataSet.GetValues("open"));
            parameters.Add(12);

            // First call
            for (int i = 0; i < numRuns; ++i)
            {
                emaFunc.ClearMap(); // remove any memoization mappings
                sw.Start();
                emaFunc.Execute(parameters);
                sw.Stop();
                sumTicks += sw.ElapsedTicks;
            }
            Console.WriteLine("Average ticks not-memoized " + (sumTicks / numRuns));

            sumTicks = 0;
            // Repeat call
            for (int i = 0; i < numRuns; ++i)
            {
                sw.Start();
                emaFunc.Execute(parameters);
                sw.Stop();
                sumTicks += sw.ElapsedTicks;
            }
            Console.WriteLine("Average ticks memoized " + (sumTicks / numRuns));
        }

    The performance is confusing me... I expected the memoized function to be faster, but it didn't work out that way:

        Average ticks not-memoized 106,182
        Average ticks memoized 198,854

    I tried doubling the data instances to 2048, but the results were about the same:

        Average ticks not-memoized 232,579
        Average ticks memoized 446,280

    I did notice that it was correctly finding the parameters in the map and going directly to the map, but the performance was still slow... I'm either open for troubleshooting help with this example, or if you have a better solution to the problem then please let me know what it is.

    Read the article

  • What can be done to speed up synchronous WCF calls?

    - by Dimitri C.
    My performance measurements of synchronous WCF calls from within a Silverlight application showed I can make 7 calls/s on a localhost connection, which is very slow. Can this be sped up, or is this normal? This is my test code:

        const UInt32 nrCalls = 100;
        ICalculator calculator = new CalculatorClient(); // taken over from the MSDN calculator example
        for (double i = 0; i < nrCalls; ++i)
        {
            var call = calculator.BeginSubtract(i + 1, 1, null, null);
            call.AsyncWaitHandle.WaitOne();
            double result = calculator.EndSubtract(call);
        }

    Remarks: CPU load is almost 0%; apparently, the WCF module is waiting for something. I tested this on both Firefox 3.6 and Internet Explorer 7. I'm using Silverlight v3.0.

    Read the article

  • How to cache and store objects and set an expiry policy in Android?

    - by virsir
    I have an app that fetches data from the internet; for better performance and bandwidth usage, I need to implement a cache layer. Two different kinds of data come from the internet: one changes every hour, and the other basically does not change. So for the first kind, I need an expiry policy that deletes the data one hour after it was created; when the user requests it, I will check the storage first and only go to the internet if nothing is found. I thought about using SharedPreferences or a SQLite database to store the JSON data or serialized object strings. My questions are: 1) What should I use: SharedPreferences, a SQLite database, or anything else? A piece of data is not big, but there may be many instances of it. 2) How do I implement that expiry system?
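
    For question 2, a minimal sketch of the expiry mechanic, assuming SharedPreferences as the store (all names are illustrative; with many instances of structured data, a SQLite table with a fetched-at column would follow the same pattern):

        import android.content.Context;
        import android.content.SharedPreferences;

        public class HourlyCache {
            private static final long TTL_MS = 60 * 60 * 1000L; // entries expire after 1 hour
            private final SharedPreferences prefs;

            public HourlyCache(Context context) {
                prefs = context.getSharedPreferences("net_cache", Context.MODE_PRIVATE);
            }

            public void put(String key, String json) {
                prefs.edit()
                     .putString(key, json)
                     .putLong(key + ":ts", System.currentTimeMillis()) // remember fetch time
                     .commit();
            }

            /** Returns the cached JSON, or null if missing or older than one hour. */
            public String get(String key) {
                long ts = prefs.getLong(key + ":ts", 0);
                if (System.currentTimeMillis() - ts > TTL_MS) {
                    return null; // stale or absent: caller should refetch from the network
                }
                return prefs.getString(key, null);
            }
        }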

    Read the article

  • How to determine the CPU and RAM needed for a Rails app?

    - by Ben
    What is the most accurate way to determine the amount of CPU speed and RAM needed to run my Rails app? I believe there are stress-testing tools like Tsung, but how do I determine, for example, that I need X more RAM or X more CPU? I would like to find some way to roughly gauge the performance needs of my application so I can anticipate future needs. I think this data will also be useful for deciding whether to upgrade one machine, or to get another dedicated machine and put all the databases on that one. Essentially, I am concerned about scaling issues and how to anticipate them. Thanks in advance for the help!

    Read the article

  • Would watching a file for changes or redundantly querying that file be more efficient?

    - by badpanda
    I am wondering whether watching a file/directory for changes using the FileSystemWatcher class is extremely memory intensive. I am developing a desktop application in C# that will be running behind the scenes continuously on low-performance computers, and I need some way of checking to see if various files have changed. I can think of a few solutions:

    - Watch the directories using FileSystemWatcher.
    - Run a timed thread on an interval that goes through and manually checks this.
    - Check manually every time the action-handler thread runs (the program will occasionally do something, on an action).

    Any suggestions? Thanks! badPanda

    Read the article

  • How to add weather info that is evaluated only once?

    - by Savvas Sopiadis
    Hi everybody! In an ASP.NET MVC (1.0) project I managed to get weather info from an RSS feed and show it. The problem I have is performance: I have put a RenderAction() method in the Site.Master file (which works perfectly), but I'm worried about how it will behave if a user clicks on menu point 1, after some seconds on menu point 2, after some seconds on menu point 3, ... thus making the RSS feed request new info again and again and again! Can this somehow be avoided? (Can this info somehow be loaded only once?) Thanks in advance!

    Read the article

  • How to efficiently SELECT rows from a database table based on a selected set of values

    - by Chau Chee Yang
    I have a transaction table of 1 million rows. The table has a field named "Code" that keeps the customer's ID. There are about 10,000 different customer codes. I have a GUI that allows the user to render a report from the transaction table, and the user may select an arbitrary number of customers for rendering. I used the IN operator first, and it works for a few customers:

        SELECT * FROM TRANS_TABLE WHERE CODE IN ('...', '...', '...')

    I quickly ran into a problem when I selected a few thousand customers: there is a limit on the IN operator. An alternative is to create a temporary table with a single CODE field and inject the selected customer codes into it using INSERT statements. I may then use:

        SELECT A.* FROM TRANS_TABLE A INNER JOIN TEMP B ON (A.CODE=B.CODE)

    This works nicely for huge selections. However, there is performance overhead for the temporary table creation, the INSERT injection, and the dropping of the temporary table. Are you aware of a better solution to handle this situation?
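
    A hedged JDBC sketch of the temp-table variant described above: batching the INSERTs keeps the injection overhead to a few round trips. The CREATE TEMPORARY TABLE syntax varies by engine (this form suits MySQL/PostgreSQL, where the table is dropped automatically at session end), and the names are illustrative:

        import java.sql.*;
        import java.util.List;

        public class CodeFilterQuery {
            /** Caller is responsible for closing the returned ResultSet and its Statement. */
            public static ResultSet selectByCodes(Connection conn, List<String> codes)
                    throws SQLException {
                try (Statement st = conn.createStatement()) {
                    st.execute("CREATE TEMPORARY TABLE temp_codes (code VARCHAR(20) PRIMARY KEY)");
                }
                try (PreparedStatement ps =
                         conn.prepareStatement("INSERT INTO temp_codes VALUES (?)")) {
                    for (String code : codes) {
                        ps.setString(1, code);
                        ps.addBatch();
                    }
                    ps.executeBatch(); // one round trip for thousands of codes
                }
                return conn.createStatement().executeQuery(
                        "SELECT A.* FROM TRANS_TABLE A INNER JOIN temp_codes B ON A.CODE = B.code");
            }
        }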

    Read the article

  • Should I trust Redis for data integrity?

    - by Jiaji
    In my current project, I have PostgreSQL as my master DB and Redis as a kind of slave; e.g., when one user adds another as a friend, the relationship is first stored in PostgreSQL and then a friend list in Redis is updated. When a user's friend list is requested, it is pulled out of Redis instead of PostgreSQL. The question is: when I update the friend list in Redis, should I get a fresh copy out of PostgreSQL and replace the old list in Redis with the new one, or should I keep the old list and simply SADD the user ID into it? The latter is of course best for performance, but intuitively the former does a better job of keeping the data intact. And if something like Celery is used, is the second method worth the risk?
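
    The trade-off in code, as a minimal Jedis sketch (the key scheme is illustrative, and the DEL-plus-SADD rebuild is not atomic, so wrap it in MULTI/EXEC if readers can race the update):

        import redis.clients.jedis.Jedis;
        import java.util.List;

        public class FriendListCache {
            // Strategy 1: rebuild the whole set from the source of truth.
            // Safer: any drift in Redis is corrected, at the cost of a full query and rewrite.
            static void rebuild(Jedis jedis, long userId, List<String> friendIdsFromPostgres) {
                String key = "friends:" + userId;
                jedis.del(key);
                if (!friendIdsFromPostgres.isEmpty()) {
                    jedis.sadd(key, friendIdsFromPostgres.toArray(new String[0]));
                }
            }

            // Strategy 2: incremental update. Fastest, but one lost write leaves the cached
            // list wrong until something rebuilds it from PostgreSQL.
            static void addFriend(Jedis jedis, long userId, long friendId) {
                jedis.sadd("friends:" + userId, String.valueOf(friendId));
            }
        }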

    Read the article

  • How to populate an array with recordset data

    - by Curtis Inderwiesche
    I am attempting to move data from a recordset directly into an array. I know this is possible, but specifically I want to do this in VBA, as this is being done in MS Access 2003. Typically I would do something like the following to achieve this:

        Dim vaData As Variant
        Dim rst As ADODB.Recordset
        ' Pull data into recordset code here...
        ' Populate the array with the whole recordset.
        vaData = rst.GetRows

    What differences exist between VB and VBA which make this type of operation not work? What about performance concerns? Is this an "expensive" operation?

    Read the article

  • Windows Azure Table Storage LINQ Operators

    - by Ryan Elkins
    Currently Table Storage supports From, Where, Take, and First. Are there plans to support any of the other 29 LINQ operators? If we have to code these ourselves, how much of a performance difference are we looking at compared to something similar in SQL on SQL Server? Do you see it being somewhat comparable, or will it be far, far slower if I need to do a Count, a Sum, or a Group By over a gigantic dataset? I like the Azure platform and the idea of cloud-based storage. I like Windows Azure for the amount of data it can store and the schema-less nature of table storage. SQL Azure just won't work, due to the high cost of storage space.

    Read the article

  • Python threads all executing on a single core

    - by Rob Lourens
    I have a Python program that spawns many threads, runs 4 at a time, and each performs an expensive operation. Pseudocode:

        for object in list:
            t = Thread(target=process, args=(object,))
            # if fewer than 4 threads are currently running, t.start().
            # Otherwise, add t to queue

    But when the program is run, Activity Monitor in OS X shows that 1 of the 4 logical cores is at 100% and the others are at nearly 0. Obviously I can't force the OS to do anything, but I've never had to pay attention to performance in multi-threaded code like this before, so I was wondering if I'm just missing or misunderstanding something. Thanks.

    Read the article

  • MySQL: Storage of multiple text fields for a record

    - by Tom
    An inexperienced question: I need to store about 10 unknown-length text fields per record in a MySQL table. I expect no more than 50K rows in total for this table, but speed is important. The database actions will be solely SELECTs for all practical purposes. I'm using InnoDB. In other words:

        id | text1 | text2 | text3 | .... | text10

    As I understand that MySQL will store the text elsewhere and keep its own indicators in the table itself, I'm wondering whether there are any fundamental performance implications I should be worrying about, given the way the data is stored (i.e., several "sub-fetches" from the table)? Thank you.

    Read the article

  • Best data structure to use for a two-ended sorted list

    - by fmark
    I need a collection data structure that can do the following:

    - Be sorted
    - Allow me to quickly pop values off the front and back of the list
    - Remain sorted after I insert a new value
    - Allow a user-specified comparison function, as I will be storing tuples and want to sort on a particular value
    - Thread-safety is not required
    - Optionally allow efficient haskey() lookups (I'm happy to maintain a separate hash table for this though)

    My thoughts at this stage are that I need a priority queue and a hash table, although I don't know if I can quickly pop values off both ends of a priority queue. I'm interested in performance for a moderate number of items (I would estimate fewer than 200,000). Another possibility is simply maintaining an OrderedDictionary and doing an insertion sort every time I add more data to it. Furthermore, are there any particular implementations in Python? I would really like to avoid writing this code myself.

    Read the article

  • Strange profiling results: definitely non-bottleneck method pops up

    - by jkff
    I'm profiling a program using sampling profiling in YourKit and JProfiler, and also "manually" (I launch it and press Ctrl-Break several times to get thread dumps). All three methods give me extremely strange results: tens of percent of the time are spent in a 3-line method that does no allocation or synchronization and has no loops, etc. Moreover, after I made this method into a no-op, and even after I removed its invocation completely, the observable program performance didn't change at all (although it gained a negligible memory leak, since the method freed a cheap resource). I'm thinking this might be because of the constraints the JVM puts on the moments at which a thread's stack trace may be taken, and it somehow turns out that in my program these are exactly the moments where this method is invoked, although there is absolutely nothing special about it or the context in which it is invoked. What can explain this phenomenon? What are the aforementioned constraints? What further measurements can I take to clarify the situation?

    Read the article

  • Oracle - timed sampling from v$session_longops

    - by FrustratedWithFormsDesigner
    I am trying to track performance of some procedures that run too slowly (and seem to keep getting slower). I am using v$session_longops to track how much work has been done, and I have a query, sofar/((v$session_longops.LAST_UPDATE_TIME - v$session_longops.start_time)*24*60*60), that tells me the rate at which work is being done. What I'd like is to capture the rate at which work is being done and how it changes over time. Right now, I just re-execute the query manually and then copy/paste into Excel. Not very optimal, especially when the phone rings or something else interrupts my sampling frequency. Is there a way to have a script in SQL*Plus run a query every n seconds, spool the results to a file, and continue doing this until the job ends? (Oracle 10g)
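
    SQL*Plus has no native sleep-and-repeat loop short of a PL/SQL block, so one workable alternative, sketched here as a small JDBC poller (the connect string, credentials handling, and 10-second interval are illustrative assumptions), is to append each sample to a CSV file for later charting in Excel:

        import java.io.FileWriter;
        import java.io.PrintWriter;
        import java.sql.*;

        public class LongOpsSampler {
            public static void main(String[] args) throws Exception {
                String url = "jdbc:oracle:thin:@//dbhost:1521/ORCL"; // illustrative
                String sql = "SELECT sid, sofar, totalwork, "
                           + "sofar / GREATEST((last_update_time - start_time) * 24 * 60 * 60, 1) AS rate "
                           + "FROM v$session_longops WHERE sofar < totalwork";
                try (Connection conn = DriverManager.getConnection(url, args[0], args[1]);
                     PreparedStatement ps = conn.prepareStatement(sql);
                     PrintWriter out = new PrintWriter(new FileWriter("longops.csv", true))) {
                    while (true) { // stop with Ctrl-C, or add an end condition
                        try (ResultSet rs = ps.executeQuery()) {
                            while (rs.next()) {
                                out.printf("%tF %<tT,%d,%d,%d,%.2f%n", new java.util.Date(),
                                        rs.getLong("sid"), rs.getLong("sofar"),
                                        rs.getLong("totalwork"), rs.getDouble("rate"));
                            }
                            out.flush();
                        }
                        Thread.sleep(10_000); // sample every 10 seconds
                    }
                }
            }
        }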

    Read the article

  • What to monitor on MSSQL Server

    - by user361434
    Hi all, I have been asked to monitor MSSQL Server (2005 & 2008) and am wondering what good metrics are to look at. I can access WMI counters but am slightly lost as to how much depth is going to be useful. Currently I have on my list:

    - user connections
    - logins per second
    - latch waits per second
    - total latch wait time
    - deadlocks per second
    - errors per second
    - log and data file sizes

    I am looking to monitor values that will indicate a degradation of performance on the machine or a potential serious issue. To this end, I am also wondering what values for some of these would be considered normal vs. problematic. As I reckon it would be a really good question to have answered for the general community, I thought I'd court some of you DBA experts out there (I am certainly not one of them!). Apologies if this is a rather open-ended question. Ry

    Read the article
