Search Results

Search found 13403 results on 537 pages for 'epm performance tuning'.


  • Coding guidelines + Best Practices?

    - by Chathuranga Chandrasekara
    I couldn't find any question that directly applies to my query, so I am posting this as a new one. If there is an existing discussion that may help me, please point it out and close the question. Question: I am going to give a presentation on C# coding guidelines, but it is not supposed to be limited to coding standards, so I also need to address good programming practices. The contents will be something like this: basic coding standards (casing, formatting, etc.) and good practices (using HashSet over other data structures where appropriate, String vs. StringBuilder, String's immutability and using strings effectively, etc.). I would really like to add more good practices, especially ones that improve performance, so I'd like to hear more good practices for C#. Any suggestions? (No need for long descriptions; just the idea is sufficient.)
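
    One of the practices listed above, sketched in Java purely as an illustration (the same immutability argument carries over to C#'s String/StringBuilder): repeated concatenation copies the whole string on every append, while a builder appends in place.

        // Immutable strings force a copy per '+=', giving O(n^2) behavior overall.
        public class ConcatDemo {
            public static void main(String[] args) {
                int n = 50_000;

                long t0 = System.nanoTime();
                String s = "";
                for (int i = 0; i < n; i++) s += 'x';       // allocates a new String each time

                long t1 = System.nanoTime();
                StringBuilder sb = new StringBuilder();
                for (int i = 0; i < n; i++) sb.append('x'); // amortized O(1) append
                String built = sb.toString();

                long t2 = System.nanoTime();
                System.out.printf("concat: %d ms, builder: %d ms%n",
                        (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000);
            }
        }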

    Read the article

  • Java 'Prototype' pattern - new vs clone vs Class.newInstance

    - by Guillaume
    In my project there are some 'Prototype' factories that create instances by cloning a final private instance. The author of those factories says that this pattern provides better performance than calling the 'new' operator. Searching Google for clues about this, I've found nothing really relevant. Here is a small excerpt from the javadoc of an unknown project: "Sadly, clone() is rather slower than calling new. However it is a lot faster than calling java.lang.Class.newInstance(), and somewhat faster than rolling our own 'cloner' method." To me this looks like an old best practice from the Java 1.1 days. Does anyone know more about this? Is this a good practice with 'modern' JVMs?
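
    A minimal, hand-rolled sketch for measuring the three creation paths yourself (the Proto class is hypothetical; a serious measurement would use a harness such as JMH and consume the created objects to defeat dead-code elimination):

        // Compares 'new', clone(), and Class.newInstance() on a trivial prototype.
        public class Proto implements Cloneable {
            private int value = 42;

            @Override
            public Proto clone() {
                try {
                    return (Proto) super.clone();
                } catch (CloneNotSupportedException e) {
                    throw new AssertionError(e);
                }
            }

            public static void main(String[] args) throws Exception {
                final int n = 10_000_000;
                Proto prototype = new Proto();

                long t0 = System.nanoTime();
                for (int i = 0; i < n; i++) { Proto p = new Proto(); }
                long t1 = System.nanoTime();
                for (int i = 0; i < n; i++) { Proto p = prototype.clone(); }
                long t2 = System.nanoTime();
                for (int i = 0; i < n; i++) { Proto p = Proto.class.newInstance(); }
                long t3 = System.nanoTime();

                System.out.printf("new: %d ms, clone: %d ms, newInstance: %d ms%n",
                        (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000, (t3 - t2) / 1_000_000);
            }
        }

    On a modern JIT, plain new is typically at least as fast as clone(), which is why this pattern is usually regarded as a relic of the Java 1.1 era.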

    Read the article

  • Optimal memory layout for read-only/write memory segments

    - by aaa
    Hello. Suppose I have two memory segments (equal in size, approximately 1 KB each); one is read-only (after initialization) and the other is read/write. What is the best layout in memory for such segments in terms of memory performance: one allocation with contiguous segments, or two allocations (in general not contiguous)? My primary architecture is 64-bit Intel Linux. My feeling is that the former (cache-friendlier) case is better. Are there circumstances where the second layout is preferred? Thanks

    Read the article

  • Web Page execution

    - by Sweta Jha
    I have a web page which brings back 13K+ records in 20 seconds. There is a menu on the page, and clicking it navigates to another page which is very lightweight. Displaying the data (13K+ records) took only 20 seconds, whereas navigating away from that page took much longer, more than 2 minutes. Can you tell me why the latter takes so much time? I've stopped the page_load code from executing on click of the menu, and I've disabled the viewstate for that page as well.

    Read the article

  • Neural Network Always Produces Same/Similar Outputs for Any Input

    - by l33tnerd
    I have a problem where I am trying to create a neural network for Tic-Tac-Toe. However, for some reason, training the neural network causes it to produce nearly the same output for any given input. I did take a look at Artificial neural networks benchmark, but my network implementation is built for neurons with the same activation function for each neuron, i.e. no constant neurons.

    To make sure the problem wasn't just due to my choice of training set (1218 board states and moves generated by a genetic algorithm), I tried to train the network to reproduce XOR. The logistic activation function was used. Instead of using the derivative, I multiplied the error by output*(1-output), as some sources suggested that this is equivalent to using the derivative. I can put the Haskell source on HPaste, but it's a little embarrassing to look at. The network has 3 layers: the first layer has 2 inputs and 4 outputs, the second has 4 inputs and 1 output, and the third has 1 output. Increasing to 4 neurons in the second layer didn't help, and neither did increasing to 8 outputs in the first layer. I then calculated the errors, network output, bias updates, and weight updates by hand based on http://hebb.mit.edu/courses/9.641/2002/lectures/lecture04.pdf to make sure there wasn't an error in those parts of the code (there wasn't, but I will probably do it again just to make sure). Because I am using batch training, I did not multiply by x in equation (4) there. I am adding the weight change, though http://www.faqs.org/faqs/ai-faq/neural-nets/part2/section-2.html suggests subtracting it instead.

    The problem persisted, even in this simplified network. For example, these are the results after 500 epochs of batch training and of incremental training:

        Input     | Target | Output (batch)       | Output (incremental)
        [1.0,1.0] | [0.0]  | [0.5003781562785173] | [0.5009731800870864]
        [1.0,0.0] | [1.0]  | [0.5003740346965251] | [0.5006347214672715]
        [0.0,1.0] | [1.0]  | [0.5003734471544522] | [0.500589332376345]
        [0.0,0.0] | [0.0]  | [0.5003674110937019] | [0.500095157458231]

    Subtracting instead of adding produces the same problem, except everything is 0.99-something instead of 0.50-something. 5000 epochs produces the same result, except the batch-trained network returns exactly 0.5 for each case. (Heck, even 10,000 epochs didn't work for batch training.) Is there anything in general that could produce this behavior? Also, I looked at the intermediate errors for incremental training, and although the inputs of the hidden/input layers varied, the error for the output neuron was always +/-0.12. For batch training, the errors were increasing, but extremely slowly, and the errors were all extremely small (x10^-7). Different initial random weights and biases made no difference, either.

    Note that this is a school project, so hints/guides would be more helpful. Although reinventing the wheel and making my own network (in a language I don't know well!) was a horrible idea, I felt it would be more appropriate for a school project (so I know what's going on... in theory, at least; there doesn't seem to be a computer science teacher at my school).

    EDIT: Two layers, an input layer of 2 inputs to 8 outputs and an output layer of 8 inputs to 1 output, produces much the same results: 0.5 +/- 0.2 (or so) for each training case. I'm also playing around with pyBrain, seeing if any network structure there will work.

    Edit 2: I am using a learning rate of 0.1. Sorry for forgetting about that.
    Edit 3: pyBrain's "trainUntilConvergence" doesn't get me a fully trained network either, but 20000 epochs does, with 16 neurons in the hidden layer. 10000 epochs and 4 neurons, not so much, but close. So, in Haskell, with the input layer having 2 inputs and 2 outputs, a hidden layer with 2 inputs and 8 outputs, and an output layer with 8 inputs and 1 output... I get the same problem with 10000 epochs. And with 20000 epochs.

    Edit 4: I ran the network by hand again based on the MIT PDF above, and the values match, so the code should be correct unless I am misunderstanding those equations. Some of my source code is at http://hpaste.org/42453/neural_network__not_working; I'm working on cleaning my code up somewhat and putting it in a GitHub (rather than a private Bitbucket) repository. All of the relevant source code is now at https://github.com/l33tnerd/hsann.
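
    For reference, a minimal sketch (variable names illustrative) of the per-example output-neuron update the question describes: logistic activation, with the error scaled by output*(1-output). Note the sign convention, which resolves the add-vs-subtract confusion above: a delta computed from (target - output) is added; only a (output - target) delta is subtracted.

        // One incremental update for a single logistic output neuron.
        static void updateOutputNeuron(double[] inputs, double[] weights,
                                       double target, double learningRate) {
            double sum = 0.0;
            for (int i = 0; i < inputs.length; i++) sum += weights[i] * inputs[i];
            double output = 1.0 / (1.0 + Math.exp(-sum));              // logistic activation
            double delta = (target - output) * output * (1 - output); // error * derivative
            for (int i = 0; i < inputs.length; i++) {
                weights[i] += learningRate * delta * inputs[i];        // added, per the sign above
            }
        }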

    Read the article

  • How do I get Java to use my multi-core processor?

    - by Rudiger
    I'm using a GZIPInputStream in my program, and I know that performance would be helped if I could get Java to run my program in parallel. In general, is there a command-line option for the standard VM to run on many cores? It's running on just one as it is. Thanks!

    Edit: I'm running plain ol' Java SE 6 update 17 on Windows XP. Would putting the GZIPInputStream on a separate thread explicitly help? No! Do not put the GZIPInputStream on a separate thread! Do NOT multithread I/O!

    Edit 2: I suppose I/O is the bottleneck, as I'm reading and writing to the same disk... In general, though, is there a way to make GZIPInputStream faster? Or a replacement for GZIPInputStream that runs in parallel?

    Edit 3: Code snippet I used:

        GZIPInputStream gzip = new GZIPInputStream(new FileInputStream(INPUT_FILENAME));
        DataInputStream in = new DataInputStream(new BufferedInputStream(gzip));
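
    One low-risk tweak, as a sketch: give the gzip stream and the buffers around it explicitly large sizes (the library defaults are small). The 64 KB figure below is an assumption to measure against, not a recommendation from this thread:

        import java.io.*;
        import java.util.zip.GZIPInputStream;

        public class BufferedGzipRead {
            public static void main(String[] args) throws IOException {
                final int BUF = 64 * 1024; // assumed size; tune against your own disk
                DataInputStream in = new DataInputStream(
                        new BufferedInputStream(
                                new GZIPInputStream(
                                        new BufferedInputStream(
                                                new FileInputStream("input.gz"), BUF), BUF), BUF));
                try {
                    byte[] chunk = new byte[BUF];
                    long total = 0;
                    int n;
                    while ((n = in.read(chunk)) != -1) {
                        total += n;
                    }
                    System.out.println("Read " + total + " bytes");
                } finally {
                    in.close();
                }
            }
        }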

    Read the article

  • Best solution for reporting database

    - by zzyzx
    Here is the situation: there is a transaction-intensive database, used for both routine transactions and reports. I was wondering if I could isolate these two operations into 2 independent databases, so reports could run off one database and all the transactions could occur in the other. This would improve performance of the OLTP SQL database. I have gone over a few options, like mirroring, log shipping, replication, snapshots, and clustering, but would like to discuss the best possible strategy for the desired result. Please advise on the best solution to implement this strategy, or share any other thoughts/suggestions you may have.

    Read the article

  • Smartest way to draw 100+ images on screen

    - by Henrik
    Hello all, first of all I'm a newbie when it comes to Android programming, so if this question is totally stupid please delete it ASAP :-). Question: I'm going to draw a grid of 10x10 PNG images. Each image is 32x32 px, and all of the images are unique. I'm thinking that the easiest way seems to be to put each image in an ImageView. If I add all the ImageViews to the layout, would this give me some kind of performance hit? Would there be a smarter way to draw these images? Thanks. / Henrik
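
    One common alternative, sketched under the assumption that the 100 images can be decoded into Bitmaps up front: a single custom View that paints the whole grid in onDraw, so the layout holds one view instead of one hundred.

        import android.content.Context;
        import android.graphics.Bitmap;
        import android.graphics.Canvas;
        import android.view.View;

        // Draws a 10x10 grid of preloaded 32x32 bitmaps in a single view.
        public class ImageGridView extends View {
            private static final int TILE = 32, COLS = 10;
            private final Bitmap[] tiles; // assumed: 100 preloaded bitmaps

            public ImageGridView(Context context, Bitmap[] tiles) {
                super(context);
                this.tiles = tiles;
            }

            @Override
            protected void onDraw(Canvas canvas) {
                for (int i = 0; i < tiles.length; i++) {
                    canvas.drawBitmap(tiles[i], (i % COLS) * TILE, (i / COLS) * TILE, null);
                }
            }
        }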

    Read the article

  • Simple Python Challenge: Fastest Bitwise XOR on Data Buffers

    - by user213060
    Challenge: perform a bitwise XOR on two equal-sized buffers. The buffers will be required to be the Python str type, since this is traditionally the type for data buffers in Python. Return the resultant value as a str. Do this as fast as possible. The inputs are two 1 megabyte (2**20 byte) strings. The challenge is to substantially beat my inefficient algorithm using Python or existing third-party Python modules (relaxed rules: or create your own module). Marginal increases are useless.

        from os import urandom
        from numpy import frombuffer, bitwise_xor, byte

        def slow_xor(aa, bb):
            a = frombuffer(aa, dtype=byte)
            b = frombuffer(bb, dtype=byte)
            c = bitwise_xor(a, b)
            r = c.tostring()
            return r

        aa = urandom(2**20)
        bb = urandom(2**20)

        def test_it():
            for x in xrange(1000):
                slow_xor(aa, bb)

    Read the article

  • How to efficiently show many images? (iPhone programming)

    - by Thomas
    In my application I needed something like a particle system, so I did the following. While the application initializes I load a UIImage:

        laserImage = [UIImage imageNamed:@"laser.png"];

    UIImage *laserImage is declared in the interface of my controller. Now every time I need a new particle, this code makes one:

        // add new laser image
        UIImageView *newLaser = [[UIImageView alloc] initWithImage:laserImage];
        [newLaser setTag:[model.lasers count]-9];
        [newLaser setBounds:CGRectMake(0, 0, 17, 1)];
        [newLaser setOpaque:YES];
        [self.view addSubview:newLaser];
        [newLaser release];

    Please notice that the images are only 17px * 1px small, and model.lasers is an internal array that does all the calculating, separated from the graphical output. So in my main drawing loop I set all the UIImageViews' positions to the calculated positions in my model.lasers array:

        for (int i = 0; i < [model.lasers count]; i++) {
            [[self.view viewWithTag:i+10] setCenter:[[model.lasers objectAtIndex:i] pos]];
        }

    I incremented the tags by 10 because the default is 0 and I don't want to move all the views with the default tag. The animation looks fine with about 10-20 images but really gets slow when working with about 60 images. So my question is: is there any way to optimize this without starting over in OpenGL ES? Thank you very much and sorry for my English! Greetings from Germany, Thomas

    Read the article

  • Open source / commercial alternative for jiffy.js?

    - by Marcel
    Hi, we'd like to measure "client-side web site performance", i.e. we would like answers to the following questions: how long did it take to load and render a web page (including all graphics, scripts, etc.)? This should be bundled with additional information like user-agent, operating system, etc. A graphical tool to analyze the data would be perfect. I know Jiffy (http://code.google.com/p/jiffy-web/wiki/Jiffy_js), but that isn't maintained anymore. I would prefer either a hosted solution (like Google Analytics) or a Java-based solution that we deploy ourselves. Do you know of something like that? Thanks, Marc

    Read the article

  • JPA EntityManager remove operation is not performant

    - by Samuel
    When I try to do an entityManager.remove(instance), the underlying JPA provider issues a separate delete operation for each GroupUser entity. I feel this is not right from a performance perspective, since if a group has 1000 users there will be 1001 calls issued to delete the entire group and its GroupUser entities. Would it make more sense to write a named query to remove all entries in the group_user table (e.g. delete from group_user where group_id=?), so I would have to make just 2 calls to delete the group?

        @Entity
        @Table(name = "tbl_group")
        public class Group {
            @OneToMany(mappedBy = "group", cascade = CascadeType.ALL, fetch = FetchType.LAZY)
            @Cascade(value = DELETE_ORPHAN)
            private Set<GroupUser> groupUsers = new HashSet<GroupUser>(0);
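
    A sketch of the two-statement approach the question proposes, assuming the GroupUser entity maps to group_user. JPQL bulk deletes bypass cascade settings and the persistence context, so the join rows are removed first and the group itself goes through find/remove (the getGroupUsers() accessor is an assumption):

        // Two round trips instead of N+1: bulk-delete the join rows, then the group.
        public void deleteGroup(EntityManager em, long groupId) {
            em.getTransaction().begin();
            // Translates to roughly: delete from group_user where group_id = ?
            em.createQuery("DELETE FROM GroupUser gu WHERE gu.group.id = :id")
              .setParameter("id", groupId)
              .executeUpdate();
            Group group = em.find(Group.class, groupId);
            if (group != null) {
                group.getGroupUsers().clear(); // the rows are already gone
                em.remove(group);
            }
            em.getTransaction().commit();
        }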

    Read the article

  • How do you handle browser cache with login/logout?

    - by Julien
    To improve performance, I'd like to add a fairly long Cache-Control (up to 30 minutes) to each page, since they do not change often. However, each page also displays the name of the user logged in (like this website). The problem is when the user logs in or out: the user name must change. How can I change the user name after each login/logout action while keeping a long Cache-Control? Here are the solutions I can think of: (1) An Ajax request (not cached) to retrieve and display the user name. If I have 2 requests (/user?registered and /user?new), they could be cached as well, but I am afraid this extra request would nullify my caching, performance-wise. (2) Add a unique URL variable (?time=) to make the URL different and cancel the cache. However, I would have to add this variable to all links on my web page, which is not very convenient code-wise. This problem becomes greater if I actually have more content that is not the same for registered users and new users.
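
    A sketch of the server side of option 1, assuming a Java servlet container (names are illustrative): every page keeps its long Cache-Control, while one tiny endpoint serving the user name is marked uncacheable.

        import java.io.IOException;
        import javax.servlet.http.*;

        // Hypothetical endpoint: returns the current user name and is never cached,
        // so long-cached pages can fetch it with a single small Ajax call.
        public class UserNameServlet extends HttpServlet {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws IOException {
                resp.setHeader("Cache-Control", "no-store, must-revalidate");
                resp.setContentType("application/json");
                HttpSession session = req.getSession(false);
                Object name = (session == null) ? null : session.getAttribute("userName");
                resp.getWriter().write("{\"name\":\"" + (name == null ? "anonymous" : name) + "\"}");
            }
        }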

    Read the article

  • C# Memoization of functions with arbitrary number of arguments

    - by Lirik
    I'm trying to create a memoization interface for functions with an arbitrary number of arguments, but I'm failing miserably. The first thing I tried was to define an interface for a function which gets memoized automatically upon execution:

        class EMAFunction : IFunction
        {
            Dictionary<List<object>, List<object>> map;

            class EMAComparer : IEqualityComparer<List<object>>
            {
                private int _multiplier = 97;

                public bool Equals(List<object> a, List<object> b)
                {
                    List<object> aVals = (List<object>)a[0];
                    int aPeriod = (int)a[1];
                    List<object> bVals = (List<object>)b[0];
                    int bPeriod = (int)b[1];
                    return (aVals.Count == bVals.Count) && (aPeriod == bPeriod);
                }

                public int GetHashCode(List<object> obj)
                {
                    // Don't compute hash code on null object.
                    if (obj == null)
                    {
                        return 0;
                    }
                    int length = obj.Count;
                    List<object> vals = (List<object>)obj[0];
                    int period = (int)obj[1];
                    return (_multiplier * vals.GetHashCode() * period.GetHashCode()) + length;
                }
            }

            public EMAFunction()
            {
                NumParams = 2;
                Name = "EMA";
                map = new Dictionary<List<object>, List<object>>(new EMAComparer());
            }

            #region IFunction Members

            public int NumParams { get; set; }
            public string Name { get; set; }

            public object Execute(List<object> parameters)
            {
                if (parameters.Count != NumParams)
                    throw new ArgumentException("The num params doesn't match!");

                if (!map.ContainsKey(parameters))
                {
                    List<double> values = new List<double>();
                    List<object> asObj = (List<object>)parameters[0];
                    foreach (object val in asObj)
                    {
                        values.Add((double)val);
                    }
                    int period = (int)parameters[1];
                    asObj.Clear();
                    List<double> ema = TechFunctions.ExponentialMovingAverage(values, period);
                    foreach (double val in ema)
                    {
                        asObj.Add(val);
                    }
                    map.Add(parameters, asObj);
                }
                return map[parameters];
            }

            public void ClearMap()
            {
                map.Clear();
            }

            #endregion
        }

    Here are my tests of the function:

        private void MemoizeTest()
        {
            DataSet dataSet = DataLoader.LoadData(DataLoader.DataSource.FROM_WEB, 1024);
            List<String> labels = dataSet.DataLabels;
            Stopwatch sw = new Stopwatch();
            IFunction emaFunc = new EMAFunction();

            List<object> parameters = new List<object>();
            int numRuns = 1000;
            long sumTicks = 0;
            parameters.Add(dataSet.GetValues("open"));
            parameters.Add(12);

            // First call
            for (int i = 0; i < numRuns; ++i)
            {
                emaFunc.ClearMap(); // remove any memoization mappings
                sw.Start();
                emaFunc.Execute(parameters);
                sw.Stop();
                sumTicks += sw.ElapsedTicks;
            }
            Console.WriteLine("Average ticks not-memoized " + (sumTicks / numRuns));

            sumTicks = 0;
            // Repeat call
            for (int i = 0; i < numRuns; ++i)
            {
                sw.Start();
                emaFunc.Execute(parameters);
                sw.Stop();
                sumTicks += sw.ElapsedTicks;
            }
            Console.WriteLine("Average ticks memoized " + (sumTicks / numRuns));
        }

    The performance is confusing me... I expected the memoized function to be faster, but it didn't work out that way:

        Average ticks not-memoized 106,182
        Average ticks memoized     198,854

    I tried doubling the data instances to 2048, but the results were about the same:

        Average ticks not-memoized 232,579
        Average ticks memoized     446,280

    I did notice that it was correctly finding the parameters in the map and going directly to it, but the performance was still slow. I'm open to troubleshooting help with this example, or, if you have a better solution to the problem, please let me know what it is.
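
    For contrast, a minimal memoization sketch in Java under the usual constraints: an immutable key that is never mutated after insertion, and equals/hashCode that is cheap relative to recomputing. Note that hashing a 1024-element list is itself O(n), so memoization only pays off when the wrapped computation costs much more than one pass over the input; the EMA below is a standard recurrence included just to make the sketch runnable.

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        public class EmaMemo {
            // Immutable key: a defensive copy of the input series plus the period.
            private record Key(List<Double> values, int period) {}

            private final Map<Key, List<Double>> cache = new HashMap<>();

            public List<Double> ema(List<Double> values, int period) {
                return cache.computeIfAbsent(new Key(List.copyOf(values), period),
                        k -> computeEma(k.values(), k.period()));
            }

            private static List<Double> computeEma(List<Double> values, int period) {
                double alpha = 2.0 / (period + 1);
                List<Double> out = new ArrayList<>(values.size());
                double prev = values.isEmpty() ? 0.0 : values.get(0);
                for (double v : values) {
                    prev = alpha * v + (1 - alpha) * prev;
                    out.add(prev);
                }
                return out;
            }
        }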

    Read the article

  • What can be done to speed up synchronous WCF calls?

    - by Dimitri C.
    My performance measurements of synchronous WCF calls from within a Silverlight application showed I can make 7 calls/s on a localhost connection, which is very slow. Can this be sped up, or is this normal? This is my test code:

        const UInt32 nrCalls = 100;
        ICalculator calculator = new CalculatorClient(); // taken from the MSDN calculator example
        for (double i = 0; i < nrCalls; ++i)
        {
            var call = calculator.BeginSubtract(i + 1, 1, null, null);
            call.AsyncWaitHandle.WaitOne();
            double result = calculator.EndSubtract(call);
        }

    Remarks: CPU load is almost 0%; apparently the WCF module is waiting for something. I tested this on both Firefox 3.6 and Internet Explorer 7. I'm using Silverlight v3.0.

    Read the article

  • Same query, different execution plans

    - by A..
    Hi, I am trying to find a solution for a problem that is driving me mad... I have a query which runs very fast on a QA server but is very slow in production. I realised that they have different execution plans, so I have tried recompiling, cleaning the execution plan cache, updating statistics, and checking the collation type... but I still can't find what's going on. The databases the query runs against are exactly the same, and the SQL Servers have the same configuration, too. Any new ideas would be much appreciated. Thanks, A.

    Read the article

  • Oracle EXECUTE IMMEDIATE changes the explain plan of a query

    - by Gunny
    I have a stored procedure that I am calling using EXECUTE IMMEDIATE. The issue I am facing is that the explain plan is different when I call the procedure directly vs. when I use EXECUTE IMMEDIATE to call it. This is causing the execution time to increase 5x. The main difference between the plans is that when I use EXECUTE IMMEDIATE, the optimizer isn't unnesting the subquery (I'm using a NOT EXISTS condition). We are using the rule-based optimizer here at work. Example:

    Fast:

        begin
          package.procedure;
        end;
        /

    Slow:

        begin
          execute immediate 'begin package.' || proc_name || '; end;';
        end;
        /

    Read the article

  • How to determine CPU, RAM needed for a Rails app?

    - by Ben
    What is the most accurate way to determine the amount of cpu speed and ram needed to run my rails app? I believe there are stress testing tools like Tsung, but how do I determine, for example, that I need X more ram, or X more CPU? I would like to find some way to roughly gauge the performance needs of my application so I can anticipate future needs. I think this data will also be useful for me to decide whether to upgrade one machine, or get another dedicated machine and put all the databases on that one. Essentially, I am concerned about scaling issues, and how to anticipate them. Thanks in advance for the help!

    Read the article

  • Browser caching issue on an https site when pressing F5

    - by sushil bharwani
    I am working on a website where I have a content entry form. This form contains a TinyMCE control, which is composed of some 40-50 files. Testing reported that the entry form loads slowly and shows those ~50 files loading every time the page loads. Is there a way I can decrease this time? I have made use of browser caching by setting the Expires header of static content to a far-future date. When I access the form through its link a second or later time, it loads fast without showing 40 files remaining, but when I press F5 it reloads the entire page. I am confused as to how F5 differs from clicking on the link. Just to add: my URL is https. Any suggestion to improve the performance of this form would be great.

    Read the article

  • How to cache and store objects and set an expiry policy in Android?

    - by virsir
    I have an app that fetches data from the internet, and for better performance and bandwidth I need to implement a cache layer. There are two different kinds of data coming from the internet: one changes every hour and the other basically does not change. So for the first kind, I need an expiry policy that deletes the data 1 hour after it was created; when the user requests that data, I will check the storage first and only go to the internet if nothing is found. I thought about using SharedPreferences or an SQLite database to store the JSON data or serialized object strings. My questions are: 1) What should I use, SharedPreferences, an SQLite database, or anything else? A piece of data is not big, but there may be many instances of it. 2) How do I implement that expiry system?
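
    A minimal sketch of the timestamp approach using SharedPreferences (key names are illustrative): store the payload together with its write time, and treat entries older than an hour as missing, so the caller falls through to the network.

        import android.content.Context;
        import android.content.SharedPreferences;

        // Sketch: a tiny expiring cache over SharedPreferences.
        public class HourlyCache {
            private static final long TTL_MS = 60 * 60 * 1000; // 1 hour
            private final SharedPreferences prefs;

            public HourlyCache(Context context) {
                prefs = context.getSharedPreferences("hourly_cache", Context.MODE_PRIVATE);
            }

            public void put(String key, String json) {
                prefs.edit()
                     .putString(key, json)
                     .putLong(key + ":ts", System.currentTimeMillis())
                     .commit();
            }

            // Returns null when absent or expired; the caller then fetches from the net.
            public String get(String key) {
                long written = prefs.getLong(key + ":ts", 0);
                if (System.currentTimeMillis() - written > TTL_MS) {
                    return null;
                }
                return prefs.getString(key, null);
            }
        }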

    Read the article

  • Page zoom slows down page rendering

    - by Alex
    I'm building a map viewer much like Google Maps, and I've run into an interesting performance problem when a page is zoomed (i.e. Ctrl+ or Ctrl-). It seems to affect all major browsers, but Firefox has the worst problems as far as I can tell. The problem is that when the page is zoomed, panning by dragging the mouse seems really sluggish. This can even be seen on Google Maps: pan the map left and right and note how smooth it is; now press Ctrl+ (3 or 4 times) and pan the map left and right in the same way. Notice the difference? Does anyone know how I can minimize this problem?

    Read the article

  • Would watching a file for changes or redundantly querying that file be more efficient?

    - by badpanda
    I am wondering whether watching a file/directory for changes using the FileSystemWatcher class is extremely memory-intensive. I am developing a desktop application in C# that will run continuously in the background on low-performance computers, and I need some way of checking whether various files have changed. I can think of a few solutions: (1) Watch the directories using FileSystemWatcher. (2) Run a timed thread on an interval that goes through and checks manually. (3) Check manually every time the action-handler thread runs (the program will occasionally do something, on an action). Any suggestions? Thanks! badPanda
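
    The question is about .NET's FileSystemWatcher; for comparison, here is the event-driven equivalent sketched with the JDK's java.nio.file.WatchService, which likewise blocks while idle instead of burning CPU on polling:

        import java.nio.file.*;

        public class DirWatcher {
            public static void main(String[] args) throws Exception {
                Path dir = Paths.get("watched-dir"); // hypothetical directory
                WatchService watcher = FileSystems.getDefault().newWatchService();
                dir.register(watcher,
                        StandardWatchEventKinds.ENTRY_CREATE,
                        StandardWatchEventKinds.ENTRY_MODIFY,
                        StandardWatchEventKinds.ENTRY_DELETE);

                while (true) {
                    WatchKey key = watcher.take(); // blocks; no CPU spent while idle
                    for (WatchEvent<?> event : key.pollEvents()) {
                        System.out.println(event.kind() + ": " + event.context());
                    }
                    if (!key.reset()) break; // directory no longer accessible
                }
            }
        }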

    Read the article

  • How to efficiently SELECT rows from a database table based on a selected set of values

    - by Chau Chee Yang
    I have a transaction table of 1 million rows. The table has a field named "Code" to keep the customer's ID. There are about 10,000 different customer codes. I have a GUI interface that allows the user to render a report from the transaction table, and the user may select an arbitrary number of customers. I used the IN operator first, and it works for a few customers:

        SELECT * FROM TRANS_TABLE WHERE CODE IN ('...', '...', '...')

    I quickly ran into a problem when I selected a few thousand customers: there is a limit on the IN operator. An alternative is to create a temporary table with only one field, CODE, and inject the selected customer codes into the temporary table using INSERT statements. I may then use:

        SELECT A.* FROM TRANS_TABLE A INNER JOIN TEMP B ON (A.CODE=B.CODE)

    This works nicely for huge selections. However, there is performance overhead from creating the temporary table, the INSERT injection, and dropping the temporary table. Are you aware of a better solution for this situation?
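
    A sketch of the temp-table variant over JDBC, with the inserts batched so the injection overhead stays small. Table and column names follow the question; the temp-table DDL syntax varies by database, so treat that line as an assumption:

        import java.sql.*;
        import java.util.List;

        public class ReportQuery {
            // Batch-fill a temp table with the selected codes, then join against it.
            static ResultSet selectByCodes(Connection conn, List<String> codes)
                    throws SQLException {
                conn.createStatement()
                    .execute("CREATE TEMPORARY TABLE TEMP (CODE VARCHAR(20))"); // DDL varies

                PreparedStatement insert =
                        conn.prepareStatement("INSERT INTO TEMP (CODE) VALUES (?)");
                for (String code : codes) {
                    insert.setString(1, code);
                    insert.addBatch();
                }
                insert.executeBatch(); // one round trip for many rows

                return conn.createStatement().executeQuery(
                        "SELECT A.* FROM TRANS_TABLE A INNER JOIN TEMP B ON (A.CODE = B.CODE)");
            }
        }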

    Read the article

  • Should I trust Redis for data integrity?

    - by Jiaji
    In my current project I have PostgreSQL as my master DB and Redis as a kind of slave; e.g., when some user adds another as a friend, the relationship is first stored in PostgreSQL and then a friend list in Redis is updated. When some user's friend list is requested, it is pulled out of Redis instead of PostgreSQL. The question is: when I update the friend list in Redis, should I get a fresh copy out of PostgreSQL and replace the old list in Redis with the new one, or should I keep the old list and simply SADD the user ID into it? The latter is of course best for performance, but intuitively the former does a better job of keeping the data integrity. And if something like Celery is used, is the second method worth the risk?
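
    A sketch of the incremental path with the Jedis client (the key layout is an assumption): PostgreSQL commits first and stays the source of truth, so the Redis set is just a view that can always be rebuilt wholesale if a delta is ever missed.

        import redis.clients.jedis.Jedis;

        public class FriendCache {
            private final Jedis jedis = new Jedis("localhost");

            // Called only after the relationship row is committed in PostgreSQL.
            public void addFriend(long userId, long friendId) {
                jedis.sadd("friends:" + userId, String.valueOf(friendId));
            }

            // Full rebuild, used when the set is missing or suspected stale.
            public void rebuild(long userId, Iterable<Long> friendIdsFromPostgres) {
                String key = "friends:" + userId;
                jedis.del(key);
                for (long id : friendIdsFromPostgres) {
                    jedis.sadd(key, String.valueOf(id));
                }
            }
        }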

    Read the article

  • PL/SQL profiler missing data

    - by user289429
    We are using the PL/SQL profiler to collect metrics. We noticed that in one environment the plsql_profiler_runs table is populated with the total execution time, but the finer details that get collected in the plsql_profiler_data table are missing. Any idea why this would be happening? We do call dbms_profiler.flush_data() before stopping the profiler, and we have seen this work fine in another environment.

    Read the article
