Search Results

Search found 23103 results on 925 pages for 'performance issues and ha'.

Page 222/925 | < Previous Page | 218 219 220 221 222 223 224 225 226 227 228 229  | Next Page >

  • Is putting the javascript before the closing body tag okay on an asp.net website?

    - by Jason Weber
    I pretty much stated what I have to ask: is taking all of your external .js files and putting them before the closing body tag on your master pages okay on an ASP.NET website? I'm just going off of what YSlow and Google Page Speed have been showing. I can't combine these JavaScript files, so I'm trying to load them "after page load", but doing so makes them useless; some of my jQuery code doesn't work. I moved my .js files above the opening body tag, and they work. What am I doing wrong? And what could I do to load my .js files after page load? Thanks for any advice anybody can offer!

    Read the article

  • JAVA bytecode optimization

    - by Idob
    This is a basic question. I have code which shouldn't run on metadata beans. All metadata beans are located under the metadata package. Now, I use the reflection API to find out whether a class is located in the metadata package: if (newEntity.getClass().getPackage().getName().contains("metadata")) I use this if in several places within this code. The question is: should I do this once, with: boolean isMetadata = false; if (newEntity.getClass().getPackage().getName().contains("metadata")) { isMetadata = true; } C++ makes optimizations and knows that this code was already called, so it won't call it again. Does Java make this optimization? I know the reflection API is a bit heavy and I'd prefer not to waste expensive runtime.
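
    Whether or not the JIT can prove the repeated call redundant, the safe fix is to evaluate the reflective check once per entity and reuse the result; a minimal Java sketch (the helper name is made up for illustration):

        static boolean isMetadataBean(Object entity) {
            // getPackage() can return null, e.g. for classes in the default package
            Package p = entity.getClass().getPackage();
            return p != null && p.getName().contains("metadata");
        }

    Call it once per entity, store the boolean, and branch on that everywhere else, so the reflection cost is paid once per entity rather than once per check.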

    Read the article

  • C#: How to implement a smart cache

    - by Svish
    I have some places where implementing some sort of cache might be useful. For example in cases of doing resource lookups based on custom strings, finding names of properties using reflection, or having only one PropertyChangedEventArgs per property name. A simple example of the last one: public static class Cache { private static Dictionary<string, PropertyChangedEventArgs> cache; static Cache() { cache = new Dictionary<string, PropertyChangedEventArgs>(); } public static PropertyChangedEventArgs GetPropertyChangedEventArgs(string propertyName) { if (cache.ContainsKey(propertyName)) return cache[propertyName]; return cache[propertyName] = new PropertyChangedEventArgs(propertyName); } } But, will this work well? For example, if we had a whole load of different propertyNames, we would end up with a huge cache sitting there, never being garbage collected or anything. I imagine that if the cached values are larger and the application is a long-running one, this might end up being a problem... or what do you think? How should a good cache be implemented? Is this one good enough for most purposes? Any examples of nice cache implementations that are not too hard to understand or way too complex to implement?
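
    For the PropertyChangedEventArgs case, a thread-safe variant of the same idea is a ConcurrentDictionary with GetOrAdd; like the original it is unbounded, so eviction is still a separate concern. A minimal sketch:

        using System.Collections.Concurrent;
        using System.ComponentModel;

        public static class ArgsCache
        {
            private static readonly ConcurrentDictionary<string, PropertyChangedEventArgs> cache =
                new ConcurrentDictionary<string, PropertyChangedEventArgs>();

            public static PropertyChangedEventArgs Get(string propertyName)
            {
                // Returns the existing entry or adds a new one; the factory may run
                // more than once under contention, but only one instance is kept.
                return cache.GetOrAdd(propertyName, name => new PropertyChangedEventArgs(name));
            }
        }

    Property names form a small, bounded set in practice, so the "never collected" concern is usually harmless here; for genuinely unbounded keys a size- or time-bounded cache would be the safer design.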

    Read the article

  • Python : How do you find the CPU consumption for a piece of code?

    - by Yugal Jindle
    Background: I have a Django application; it works and responds pretty well under low load, but under high load, like 100 users/sec, it consumes 100% CPU and then slows down due to lack of CPU. Problem: Profiling the application gives me the time taken by each function. This time increases under high load. The time consumed may be due to complex calculation or to waiting for CPU. So, how do I find the CPU cycles consumed by a piece of code? Reducing the CPU consumption will improve the response time. I might have written extremely efficient code and need to add more CPU power, OR I might have some stupid code hogging the CPU and causing the slowdown. Any help is appreciated! Update: I am using JMeter to load-test my webapp; it gives me a throughput of 2 requests/sec [100 users]. I get an average time of 36 seconds across 100 requests vs 1.25 sec for a single request. More info: Configuration: Nginx + uWSGI with 4 workers. No database used; responses come from a REST API. On the 1st hit the REST API response gets cached, so it doesn't make a difference. Using ujson for JSON parsing. Curious to know: Python/Django is used by so many orgs for so many big sites, so there must be some high-end debug / memory-CPU analysis tools. All I found were casual snippets of code that perform profiling.
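
    One way to separate "burning CPU" from "waiting for the CPU (or I/O)" is to compare wall-clock time with process CPU time around the suspect code; a small sketch, assuming Python 3.3+ for time.process_time():

        import time

        def measure(func, *args, **kwargs):
            """Run func once and report wall-clock vs. CPU time.

            A large gap between the two means the code spent most of its time
            waiting (on the scheduler, I/O, locks, ...) rather than computing.
            """
            wall0 = time.perf_counter()
            cpu0 = time.process_time()
            result = func(*args, **kwargs)
            wall = time.perf_counter() - wall0
            cpu = time.process_time() - cpu0
            print("wall: %.4fs  cpu: %.4fs  waiting: %.4fs" % (wall, cpu, wall - cpu))
            return result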

    Read the article

  • Intersection() and Except() is too slow with large collections of custom objects

    - by Theo
    I am importing data from another database. My process is importing data from a remote DB into a List<DataModel> named remoteData and also importing data from the local DB into a List<DataModel> named localData. I am then using LINQ to create a list of records that are different so that I can update the local DB to match the data pulled from the remote DB. Like this: var outdatedData = this.localData.Intersect(this.remoteData, new OutdatedDataComparer()).ToList(); I am then using LINQ to create a list of records that no longer exist in remoteData, but do exist in localData, so that I can delete them from the local database. Like this: var oldData = this.localData.Except(this.remoteData, new MatchingDataComparer()).ToList(); I am then using LINQ to do the opposite of the above to add the new data to the local database. Like this: var newData = this.remoteData.Except(this.localData, new MatchingDataComparer()).ToList(); Each collection imports about 70k records, and each of the 3 LINQ operations takes between 5 and 10 minutes to complete. How can I make this faster? Here is the object the collections are using: internal class DataModel { public string Key1{ get; set; } public string Key2{ get; set; } public string Value1{ get; set; } public string Value2{ get; set; } public byte? Value3{ get; set; } } The comparer used to check for outdated records: class OutdatedDataComparer : IEqualityComparer<DataModel> { public bool Equals(DataModel x, DataModel y) { var e = string.Equals(x.Key1, y.Key1) && string.Equals(x.Key2, y.Key2) && ( !string.Equals(x.Value1, y.Value1) || !string.Equals(x.Value2, y.Value2) || x.Value3 != y.Value3 ); return e; } public int GetHashCode(DataModel obj) { return 0; } } The comparer used to find old and new records: internal class MatchingDataComparer : IEqualityComparer<DataModel> { public bool Equals(DataModel x, DataModel y) { return string.Equals(x.Key1, y.Key1) && string.Equals(x.Key2, y.Key2); } public int GetHashCode(DataModel obj) { return 0; } }
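
    The constant GetHashCode is almost certainly the bottleneck here: Intersect and Except build a hash set with the supplied comparer, and when every object hashes to 0 they degrade to pairwise Equals calls across ~70k x 70k records. A sketch of a comparer that hashes the same keys it compares (the same idea applies to OutdatedDataComparer, since its Equals also requires Key1 and Key2 to match):

        using System.Collections.Generic;

        internal class MatchingDataComparer : IEqualityComparer<DataModel>
        {
            public bool Equals(DataModel x, DataModel y)
            {
                return string.Equals(x.Key1, y.Key1) && string.Equals(x.Key2, y.Key2);
            }

            public int GetHashCode(DataModel obj)
            {
                // Hash the same fields Equals compares so equal records land in the
                // same bucket and the set operations stay close to O(n).
                unchecked
                {
                    int hash = 17;
                    hash = hash * 31 + (obj.Key1 == null ? 0 : obj.Key1.GetHashCode());
                    hash = hash * 31 + (obj.Key2 == null ? 0 : obj.Key2.GetHashCode());
                    return hash;
                }
            }
        }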

    Read the article

  • Will an IO blocked process show 100% CPU utilization in 'top' output?

    - by Alex Stoddard
    I have an analysis that can be parallelized over a varying number of processes. It is expected that things will be both IO and CPU intensive (very high-throughput short-read DNA alignment, if anyone is curious). The system running this is a 48-core Linux server. The question is how to determine the optimum number of processes such that total throughput is maximized. At some point the processes will presumably become IO bound, such that adding more processes will be of no benefit and possibly detrimental. Can I tell from standard system monitoring tools when that point has been reached? Would the output of top (or maybe a different tool) enable me to distinguish between an IO-bound and a CPU-bound process? I am suspicious that a process blocked on IO might still show 100% CPU utilization.

    Read the article

  • Create a PHP cache system in MySQL database?

    - by Zach Smith
    I'm creating a web service that often scrapes data from remote web pages. After scraping this data, I have a simple multidimensional array of information to use. The scraping process is fairly taxing on my server, and the page load takes a while. I was considering adding a simple cache system using a MySQL database, where I create one row per remote web page with the array of information pulled from it stored as a JSON-encoded string. Is this a good enough system? Or would something like a text file per web page be a better idea?

    Read the article

  • C++ Function pointers vs Switch

    - by Perfix
    What is faster: Function pointers or switch? The switch statement would have around 30 cases, consisting of enumerated unsigned ints from 0 to 30. I could do the following: class myType { FunctionEnum func; string argv[123]; int someOtherValue; }; // In another file: myType current; // Iterate through a vector containing lots of myTypes // ... for ( i=0; i < myVecSize; i ++ ) switch ( current.func ) { case 1: //... break; // ........ case 30: // blah break; } And go through the switch on func every time. The good thing about switch would also be that my code is more organized than with 30 functions. Or I could do that (not so sure with that): class myType { myReturnType (*func); string argv[123]; int someOtherValue; }; I'd then have 30 different functions; at the beginning, a pointer to one of them is assigned to each myType. What is probably faster: Switch statement or function pointer? Calls per second: Around 10 million. I can't just test it out - that would require me to rewrite the whole thing. Currently using switch. I'm building an interpreter which I want to be faster than Python & Ruby - every clock cycle matters!
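
    For comparison, a dispatch table is just an array of function pointers indexed by the opcode, so each call costs one indexed load plus an indirect call; a dense switch over 0..29 usually compiles to a very similar jump table, so the honest answer is to measure both. A minimal C++ sketch of the table approach (handler names and signature are made up):

        #include <string>

        // Hypothetical handler signature; the real one would take whatever state
        // the interpreter actually needs.
        typedef void (*OpHandler)(const std::string* argv, int someOtherValue);

        void opNop(const std::string*, int) { /* ... */ }
        void opAdd(const std::string*, int) { /* ... */ }
        // ... one handler per opcode; the table must list all 30 in enum order.

        static const OpHandler kHandlers[] = { opNop, opAdd /* , ... */ };

        inline void dispatch(unsigned func, const std::string* argv, int v) {
            kHandlers[func](argv, v);   // one load + one indirect call, no comparisons
        }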

    Read the article

  • In sync query calls, one query causing other query to run slower. Why?

    - by Irchi
    Sorry for the long question, but I think this is an interesting situation and I couldn't find any explanations for it: I was involved in optimization of an application that performed a large number of sequential SELECT and INSERT statements on a single dedicated SQL Server database. The process needs to INSERT a large number of records into a table, but for each of them there should be some value mappings, which are performed using SELECT statements on another table in the same database. For a specific execution, it took 90 minutes to run. I used a profiler (JProfiler - the application is Java-based) to determine how much time each part of the application takes. It showed that 60% of the time was spent on INSERT method calls, and almost 20% on SELECT calls (the rest was distributed over other parts). After some trials, I came to this situation: I commented out the INSERT query that took 60% of the time. I was expecting the total run time to be around 35 minutes, as I had removed 60% of the 90 minutes. But the whole process took the same 90 minutes (doing only SELECTs and nothing else), and each SELECT took longer this time! Everything was running sync, there were no async calls. And there was only one single thread of execution. SELECT and INSERT queries are very simple, and don't have anything special, and they are on different tables, but in the same DB. I tested with both the DB on the application machine, and on a remote network machine. I can't think of any explanation for this, as the profiler (application profiler, not SQL Profiler) reported the changes in the method call times, and by removing INSERT statements the SELECT statements took longer to run. Can anyone give me some kind of explanation of what could have happened? (there can't be cache / query optimization stuff, because the queries were run in sync, and in a single thread, and it was far from affecting the cache this much) I should note that the speed bottleneck was in SQL Server, which was using most of the CPU time.

    Read the article

  • Slowing process creation under Java?

    - by oconnor0
    I have a single JVM [1] with a large heap (up to 240GB, though in the 20-40GB range for most of this phase of execution) running under Linux [2] on a server with 24 cores. We have tens of thousands of objects that have to be processed by an external executable, and we then load the data created by those executables back into the JVM. Each executable produces about half a megabyte of data (on disk) which, when read back in after the process finishes, is of course larger. Our first implementation was to have each executable handle only a single object. This involved spawning twice as many processes as we had objects (since we called a shell script that called the executable). Our CPU utilization would start off high, but not necessarily 100%, and slowly worsen. As we began measuring to see what was happening, we noticed that the process creation time [3] continually grows. While starting at sub-second times, it would eventually grow to take a minute or more. The actual processing done by the executable usually takes less than 10 seconds. Next we changed the executable to take a list of objects to process in an attempt to reduce the number of processes created. With batch sizes of a few hundred (~1% of our current sample size), the process creation times start out around 2 seconds and grow to around 5-6 seconds. Basically, why is it taking so long to create these processes as execution continues? [1] Oracle JDK 1.6.0_22 [2] Red Hat Enterprise Linux Advanced Platform 5.3, Linux kernel 2.6.18-194.26.1.el5 #1 SMP [3] Creation of the ProcessBuilder object, redirecting the error stream, and starting it.

    Read the article

  • Wpf. Chart optimization. More than million points

    - by Evgeny
    I have a custom control - a chart sized, for example, 300x300 pixels with more than one million points (maybe less) in it. And it's clear that it now works very slowly. I am searching for an algorithm which will show only a few points with minimal visual difference. I have a link to a component which has exactly the functionality I need (2 million points demo): http://www.mindscape.co.nz/demo/SilverlightElements/demopage.html#/ChartOverviewPage I will be grateful for any materials, links or thoughts on how to realize such functionality.
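
    The standard trick is decimation: for each horizontal pixel column, keep only the minimum and maximum sample in that bucket, which preserves the visual envelope of the series while cutting a million points down to at most 2 x chart width. A rough C# sketch, assuming the points are already sorted by X (System.Windows.Point is used here just as an x/y pair):

        using System;
        using System.Collections.Generic;
        using System.Windows;   // Point

        public static class ChartDecimator
        {
            public static List<Point> Decimate(IList<Point> points, int pixelWidth)
            {
                var result = new List<Point>();
                if (points.Count == 0 || pixelWidth <= 0) return result;

                int bucketSize = Math.Max(1, points.Count / pixelWidth);
                for (int start = 0; start < points.Count; start += bucketSize)
                {
                    int end = Math.Min(start + bucketSize, points.Count);
                    int minIdx = start, maxIdx = start;
                    for (int i = start; i < end; i++)
                    {
                        if (points[i].Y < points[minIdx].Y) minIdx = i;
                        if (points[i].Y > points[maxIdx].Y) maxIdx = i;
                    }
                    // Keep the two extremes of each bucket, in their original X order.
                    result.Add(points[Math.Min(minIdx, maxIdx)]);
                    if (minIdx != maxIdx) result.Add(points[Math.Max(minIdx, maxIdx)]);
                }
                return result;
            }
        }

    Re-decimating on zoom or resize keeps the drawn geometry proportional to the control's width instead of to the raw point count.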

    Read the article

  • What influences running time of reading a bunch of images?

    - by remi
    I have a program where I read a large number of tiny images (50000 images of size 32x32). I read them using the OpenCV imread function, in a program like this: std::vector<std::string> imageList; // is initialized with full path to the 50K images for(string s : imageList) { cv::Mat m = cv::imread(s); } Sometimes, it will read the images in a few seconds. Sometimes, it takes a few minutes to do so. I run this program in GDB, with a breakpoint placed after the loop that reads the images, so it's not because I'm stuck at a breakpoint. The same "erratic" behaviour happens when I run the program outside GDB. The same "erratic" behaviour happens with the program compiled with/without optimisation. The same "erratic" behaviour happens whether or not I have other programs running in the background. The images are always at the same place on the hard drive of my machine. I run the program on a SUSE Linux distribution, compiled with gcc. So I am wondering what could affect the time taken to read the images that much?

    Read the article

  • What is microbenchmarking?

    - by polygenelubricants
    I've heard this term used, but I'm not entirely sure what it means, so: What DOES it mean and what DOESN'T it mean? What are some examples of what IS and ISN'T microbenchmarking? What are the dangers of microbenchmarking and how do you avoid it? (or is it a good thing?)

    Read the article

  • can a program written in C be faster than one written in OCaml and translated to C?

    - by Ole Jak
    So I have some cool image processing algorithm. I have written it in OCaml. It performs well. I know I can now compile it as C code with a command like ocamlc -output-obj -o foo.c foo.ml (I have a situation where I am not allowed to use the OCaml compiler to build my program for my architecture; I can only use a specially modified gcc, so I will compile that program with something like gcc -L/usr/lib/ocaml foo.c -lcamlrun -lm -lncurses and it'll run on my architecture.) I want to know: in the general case, can a program written in C be faster than one written in OCaml and translated to C?

    Read the article

  • Sql Server 2000 Stored Procedure Prevents Parallelism or something?

    - by user187305
    I have a huge disgusting stored procedure that wasn't slow a couple of months ago, but now is. I barely know what this thing does and I am in no way interested in rewriting it. I do know that if I take the body of the stored procedure, declare/set the values of the parameters, and run it in Query Analyzer, it runs more than 20x faster. From the internet, I've read that this is probably due to a bad cached query plan. So, I've tried running the sp with "WITH RECOMPILE" after the EXEC and I've also tried putting the "WITH RECOMPILE" inside the sp, but neither of those helped even a little bit. When I look at the execution plan of the sp vs. the query, the biggest difference is that the sp has "Parallelism" operations all over the place and the query doesn't have any. Can this be the cause of the difference in speeds? Thank you, any ideas would be great... I'm stuck.
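
    The symptom (fast with literals in Query Analyzer, slow as a stored procedure) is the classic parameter-sniffing pattern: the cached plan was built for unrepresentative parameter values. On SQL Server 2000 the usual workaround is to copy the parameters into local variables so the optimizer plans from column statistics instead of the sniffed values; a T-SQL sketch with made-up procedure, table and column names:

        CREATE PROCEDURE dbo.BigReport      -- hypothetical
            @FromDate datetime,
            @ToDate   datetime
        AS
        BEGIN
            -- Local copies defeat parameter sniffing: the plan is built from
            -- statistics, not from whatever values the first caller passed in.
            DECLARE @LocalFrom datetime, @LocalTo datetime
            SELECT @LocalFrom = @FromDate, @LocalTo = @ToDate

            SELECT OrderId, Total
            FROM   dbo.Orders
            WHERE  CreatedOn BETWEEN @LocalFrom AND @LocalTo
            -- Adding OPTION (MAXDOP 1) to the slow statement would also rule the
            -- parallelism operators in or out as the cause.
        END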

    Read the article

  • JavaScript: Is there a better way to retain your array but efficiently concat or replace items?

    - by Michael Mikowski
    I am looking for the best way to replace or add to elements of an array without deleting the original reference. Here is the setup: var a = [], b = [], c, i, obj = {}; for ( i = 0; i < 100000; i++ ) { a[ i ] = i; b[ i ] = 10000 - i; } obj.data_list = a; Now we want to concatenate b INTO a without changing the reference to a, since it is used in obj.data_list. Here is one method: for ( i = 0; i < b.length; i++ ) { a.push( b[ i ] ); } This seems to be a somewhat terser and 8x (on V8) faster method: a.splice.apply( a, [ a.length, 0 ].concat( b ) ); I have found this useful when iterating over an "in-place" array and don't want to touch the elements as I go (a good practice). I start a new array (let's call it keep_list) with the initial arguments and then add the elements I wish to retain. Finally I use this apply method to quickly replace the truncated array: var keep_list = [ 0, 0 ]; for ( i = 0; i < a.length; i++ ){ if ( some_condition ){ keep_list.push( a[ i ] ); } } // truncate array a.length = 0; // And replace contents a.splice.apply( a, keep_list ); There are a few problems with this solution: there is a max call stack size limit of around 50k on V8; I have not tested on other JS engines yet. This solution is a bit cryptic. Has anyone found a better way?
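
    To keep the apply trick but stay under the engine's argument limit, the append can be done in fixed-size chunks; a small sketch:

        // Append the contents of addList to baseList in place, in chunks, so
        // Function.prototype.apply never receives more arguments than the
        // engine's stack/argument limit allows.
        function appendInPlace(baseList, addList) {
          var CHUNK = 10000;
          for (var i = 0; i < addList.length; i += CHUNK) {
            baseList.push.apply(baseList, addList.slice(i, i + CHUNK));
          }
          return baseList;
        }

        // obj.data_list still points at the same array afterwards:
        // appendInPlace(a, b);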

    Read the article

  • Running Boggle Solver takes over an hour to run. What is wrong with my code?

    - by user1872912
    So I am running a Boggle solver in Java in the NetBeans IDE. When I run it, I have to quit after 10 minutes or so because it will end up taking about 2 hours to run completely. Is there something wrong with my code, or a way that will make it substantially faster? public void findWords(String word, int iLoc, int jLoc, ArrayList<JLabel> labelsUsed){ if(iLoc < 0 || iLoc >= 4 || jLoc < 0 || jLoc >= 4){ return; } if(labelsUsed.contains(jLabels[iLoc][jLoc])){ return; } word += jLabels[iLoc][jLoc].getText(); labelsUsed.add(jLabels[iLoc][jLoc]); if(word.length() >= 3 && wordsPossible.contains(word)){ wordsMade.add(word); } findWords(word, iLoc-1, jLoc, labelsUsed); findWords(word, iLoc+1, jLoc, labelsUsed); findWords(word, iLoc, jLoc-1, labelsUsed); findWords(word, iLoc, jLoc+1, labelsUsed); findWords(word, iLoc-1, jLoc+1, labelsUsed); findWords(word, iLoc-1, jLoc-1, labelsUsed); findWords(word, iLoc+1, jLoc-1, labelsUsed); findWords(word, iLoc+1, jLoc+1, labelsUsed); labelsUsed.remove(jLabels[iLoc][jLoc]); } Here is where I call this method from: public void findWords(){ ArrayList <JLabel> labelsUsed = new ArrayList<JLabel>(); for(int i=0; i<jLabels.length; i++){ for(int j=0; j<jLabels[i].length; j++){ findWords(jLabels[i][j].getText(), i, j, labelsUsed); //System.out.println("Done"); } } } Edit: BTW, I am using a GUI and the letters on the board are displayed using JLabels.
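
    The usual fix for a solver like this is prefix pruning: stop recursing as soon as no dictionary word can start with the letters collected so far, which cuts the search from millions of dead-end paths to only viable ones. A Java sketch meant to live in the same solver class (the field name is made up; it would hold the same words as wordsPossible, kept sorted):

        import java.util.TreeSet;

        // Sorted copy of the dictionary, built once at startup.
        static final TreeSet<String> sortedWords = new TreeSet<String>();

        static boolean isPrefixOfSomeWord(String prefix) {
            String candidate = sortedWords.ceiling(prefix);   // smallest word >= prefix
            return candidate != null && candidate.startsWith(prefix);
        }

    Calling isPrefixOfSomeWord(word) right after the current letter is appended, and returning immediately when it is false, avoids exploring paths that can never form a word; a trie gives the same effect with less per-call work.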

    Read the article

  • Update table with index is too slow

    - by pauloya
    Hi, I was watching the Profiler on a live system of our application and I saw that there was an update instruction that we run periodically (every second) that was quite slow. It took around 400ms every time. The query includes this update (which is the slow part) UPDATE BufferTable SET LrbCount = LrbCount + 1, LrbUpdated = getdate() WHERE LrbId = @LrbId This is the table CREATE TABLE BufferTable( LrbId [bigint] IDENTITY(1,1) NOT NULL, ... LrbInserted [datetime] NOT NULL, LrbProcessed [bit] NOT NULL, LrbUpdated [datetime] NOT NULL, LrbCount [tinyint] NOT NULL, ) The table has 2 indexes (non-unique and non-clustered) with the fields in this order: * Index1 - (LrbProcessed, LrbCount) * Index2 - (LrbInserted, LrbCount, LrbProcessed) When I looked at this I thought that the problem would come from Index1, since LrbCount is changing a lot and it changes the order of the data in the index. But after deactivating index1 I saw the query was taking the same time as initially. Then I rebuilt index1 and deactivated index2; this time the query was very fast. It seems to me that Index2 should be faster to update: the order of the data shouldn't change since the LrbInserted time is not changed. Can someone explain why index2 is much heavier to update than index1? Thank you!

    Read the article

  • Json.Net CustomSerialization

    - by BonyT
    I am serializing a collection of objects that contains a dictionary called dynamic properties. The default Json emitted looks like this: [{"dynamicProperties":{"WatchId":7771,"Issues":0,"WatchType":"x","Location":"Equinix Source","Name":"PI_5570_5580"}}, {"dynamicProperties":{"WatchId":7769,"Issues":0,"WatchType":"x","Location":"Equinix Source","Name":"PI_5570_5574"}}, {"dynamicProperties":{"WatchId":7767,"Issues":0,"WatchType":"x","Location":"Equinix Source","Name":"PI_5570_5572"}}, {"dynamicProperties":{"WatchId":7765,"Issues":0,"WatchType":"y","Location":"Equinix Source","Name":"highlight_SM"}}, {"dynamicProperties":{"WatchId":8432,"Issues":0,"WatchType":"y","Location":"Test Devices","Name":"Cisco1700PI"}}] I'd like to produce Json that looks like this: [{"WatchId":7771,"Issues":0,"WatchType":"x","Location":"Equinix Source","Name":"PI_5570_5580"}, {"WatchId":7769,"Issues":0,"WatchType":"x","Location":"Equinix Source","Name":"PI_5570_5574"}, {"WatchId":7767,"Issues":0,"WatchType":"x","Location":"Equinix Source","Name":"PI_5570_5572"}, {"WatchId":7765,"Issues":0,"WatchType":"y","Location":"Equinix Source","Name":"highlight_SM"}, {"WatchId":8432,"Issues":0,"WatchType":"y","Location":"Test Devices","Name":"Cisco1700PI"}] From reading the Json.Net documentation it looks like I could build a CustomContractResolver for my class, but I cannot find any details on how to go about this... Can anyone shed any light on the direction I should be looking in?
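
    A ContractResolver would work, but the simpler route in Json.Net is usually a JsonConverter that writes the inner dictionary directly, so the dynamicProperties wrapper never appears in the output. A C# sketch, assuming the items look roughly like the hypothetical WatchItem below:

        using System;
        using System.Collections.Generic;
        using Newtonsoft.Json;

        public class WatchItem          // hypothetical shape of the serialized objects
        {
            public Dictionary<string, object> dynamicProperties { get; set; }
        }

        public class WatchItemConverter : JsonConverter
        {
            public override bool CanConvert(Type objectType)
            {
                return objectType == typeof(WatchItem);
            }

            public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
            {
                // Emit the dictionary itself instead of an object wrapping it.
                serializer.Serialize(writer, ((WatchItem)value).dynamicProperties);
            }

            public override object ReadJson(JsonReader reader, Type objectType,
                                            object existingValue, JsonSerializer serializer)
            {
                throw new NotImplementedException("Only serialization is needed here.");
            }
        }

        // Usage: JsonConvert.SerializeObject(watchItems, new WatchItemConverter());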

    Read the article

  • how do i do very fast inserts to SQL Server 2008

    - by CharlesO
    I have a project that involves recording data from a device directly into a SQL table. I do very little processing in code before writing to SQL Server (2008 Express, by the way). Typically I use the SqlHelper class's ExecuteNonQuery method and pass in a stored proc name and a list of parameters that the SP expects. This is very convenient, but I need a much faster way of doing this. Thanks.
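
    For row-at-a-time device data, the usual answer is to buffer readings client-side and push them with SqlBulkCopy instead of one stored-procedure round trip per row; a C# sketch with made-up table and column names:

        using System.Data;
        using System.Data.SqlClient;

        public static class BulkLoader
        {
            public static void BulkInsert(DataTable readings, string connectionString)
            {
                // 'readings' must use the same column names/types as the destination table.
                using (var bulk = new SqlBulkCopy(connectionString))
                {
                    bulk.DestinationTableName = "dbo.DeviceReading";   // hypothetical table
                    bulk.BatchSize = 5000;                             // rows per round trip
                    bulk.WriteToServer(readings);
                }
            }
        }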

    Read the article

  • Hibernate Relationship Mapping/Speed up batch inserts

    - by manyxcxi
    I have 5 MySQL InnoDB tables: Test, InputInvoice, InputLine, OutputInvoice, OutputLine, and each is mapped and functioning in Hibernate. I have played with using StatelessSession/Session, and JDBC batch size. I have removed any generator classes to let MySQL handle the id generation - but it is still performing quite slowly. Each of those tables is represented in a Java class, and mapped in Hibernate accordingly. Currently when it comes time to write the data out, I loop through the objects and do a session.save(Object) or session.insert(Object) if I'm using StatelessSession. I also do a flush and clear (when using Session) when my line count reaches the max JDBC batch size (50). Would it be faster if I had these in a 'parent' class that held the objects and did a session.save(master) instead of each one? If I had them in a master/container class, how would I map that in Hibernate to reflect the relationship? The container class wouldn't actually be a table of its own, but a relationship all based on two indexes run_id (int) and line (int). Another direction would be: How do I get Hibernate to do a multi-row insert?
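
    Two things worth checking before restructuring the mapping: Hibernate silently disables JDBC insert batching for entities whose ids come from the database's identity/auto-increment mechanism (it needs the generated id immediately), and MySQL Connector/J only collapses a JDBC batch into one multi-row INSERT when rewriteBatchedStatements=true is set on the connection URL. A Java sketch of the relevant pieces (entity and variable names are taken from the question or assumed):

        // Hibernate configuration properties (hibernate.cfg.xml / persistence.xml):
        //   hibernate.jdbc.batch_size = 50
        //   hibernate.order_inserts   = true
        // MySQL Connector/J URL option so each JDBC batch becomes one multi-row INSERT:
        //   jdbc:mysql://localhost/mydb?rewriteBatchedStatements=true

        // imports: org.hibernate.Session, org.hibernate.Transaction
        Session session = sessionFactory.openSession();
        Transaction tx = session.beginTransaction();
        int i = 0;
        for (InputLine line : lines) {          // 'lines' = entities to persist
            session.save(line);
            if (++i % 50 == 0) {                // match hibernate.jdbc.batch_size
                session.flush();                // push the batch to the driver
                session.clear();                // keep the first-level cache small
            }
        }
        tx.commit();
        session.close();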

    Read the article

  • PHP : flatten array - fastest way?

    - by Industrial
    Is there any fast way to flatten an array and select subkeys ('key'&'value' in this case) without running a foreach loop, or is the foreach always the fastest way? Array ( [0] => Array ( [key] => string [value] => a simple string [cas] => 0 ) [1] => Array ( [key] => int [value] => 99 [cas] => 0 ) [2] => Array ( [key] => array [value] => Array ( [0] => 11 [1] => 12 ) [cas] => 0 ) ) To: Array ( [int] => 99 [string] => a simple string [array] => Array ( [0] => 11 [1] => 12 ) )
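
    A foreach is already about as fast as this gets in userland PHP, but on PHP 5.5+ the whole reshaping can be done in one built-in call; a sketch, assuming $rows is the outer array shown above:

        <?php
        // PHP >= 5.5: take the 'value' column and index it by the 'key' column.
        $flat = array_column($rows, 'value', 'key');

        // Equivalent loop for older PHP versions:
        $flat = array();
        foreach ($rows as $row) {
            $flat[$row['key']] = $row['value'];
        }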

    Read the article

  • sql query is too slow, how to improve speed

    - by user1289282
    I have run into a bottleneck when trying to update one of my tables. The player table has, among other things, id, skill, school, weight. What I am trying to do is: SELECT id, skill FROM player WHERE player.school = (current school of 4500) AND player.weight = (current weight of 14) to find the highest skill of all players returned from the query; UPDATE player SET starter = 'TRUE' WHERE id = (highest skill); move to the next weight and repeat; when all weights have been completed, move to the next school and start over; when all schools are completed, done. I have this code implemented and it works, but I have approximately 4500 schools totaling 172000 players and the way I have it now, it would take probably a half hour or more to complete (did not wait it out), which is way too slow. How do I speed this up? Short of reducing the scale of the system, I am willing to do anything that gets the intended result. Thanks! *the weights are the standard folkstyle wrestling weights, i.e., 103, 113, 120, 126, 132, 138, 145, 152, 160, 170, 182, 195, 220, 285 pounds
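
    If the database supports multi-table UPDATE (the syntax below is MySQL's), all 4500 x 14 round trips can be collapsed into one statement that joins each player to the top skill for their school/weight group; a sketch (note that ties would all be marked as starters here):

        UPDATE player p
        JOIN (
            SELECT school, weight, MAX(skill) AS max_skill
            FROM player
            GROUP BY school, weight
        ) best
          ON  best.school    = p.school
          AND best.weight    = p.weight
          AND best.max_skill = p.skill
        SET p.starter = 'TRUE';

    An index on (school, weight, skill) makes both the grouping and the join cheap.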

    Read the article

  • MySQL Insert Query Randomly Takes a Long Time

    - by ShimmerTroll
    I am using MySQL to manage session data for my PHP application. When testing the app, it is usually very quick and responsive. However, seemingly randomly the response will stall before finally completing after a few seconds. I have narrowed the problem down to the session write query which looks something like this: INSERT INTO Session VALUES('lvg0p9peb1vd55tue9nvh460a7', '1275704013', '') ON DUPLICATE KEY UPDATE sessAccess='1275704013',sessData=''; The slow query log has this information: Query_time: 0.524446 Lock_time: 0.000046 Rows_sent: 0 Rows_examined: 0 This happens about 1 out of every 10 times. The query usually only takes ~0.0044 sec. The table is InnoDB with about 60 rows. sessId is the primary key with a BTREE index. Since this is accessed on every page view, it is clearly not an acceptable execution time. Why is this happening? Update: Table schema is: sessId:varchar(32), sessAccess:int(10), sessData:text

    Read the article

< Previous Page | 218 219 220 221 222 223 224 225 226 227 228 229  | Next Page >