Search Results

Search found 14282 results on 572 pages for 'performance counter'.

Page 165/572

  • MySQL Locking Up

    - by Ian
    I've got an InnoDB table that gets a lot of reads and almost no writes (roughly 1 write for every 400,000 reads). I'm running into a pretty big problem, though, when I INSERT into the table: MySQL completely locks up. It uses 100% CPU, and every single other table (even in other databases) has its status set to "Locked" until the INSERT is done. This is a big problem because MySQL stays locked up for up to 4 minutes. I'm using version 5.1.47 (RPM from mysql.com). Any ideas?
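
    A first diagnostic pass (a sketch of generic MySQL troubleshooting steps, not a confirmed fix) is to check which engine each table actually uses and what the server is waiting on while the INSERT runs, since "Locked" statuses on unrelated tables usually point at table-level locks or a shared resource:

      -- MyISAM tables take table-level locks; list everything that isn't InnoDB.
      SELECT TABLE_SCHEMA, TABLE_NAME, ENGINE
      FROM information_schema.TABLES
      WHERE ENGINE <> 'InnoDB';

      -- While the INSERT is running, see what every connection is waiting on.
      SHOW FULL PROCESSLIST;

      -- Inspect InnoDB internals: lock waits, pending I/O, semaphore contention.
      SHOW ENGINE INNODB STATUS;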

  • NHibernate unintentional lazy property loading

    - by chiccodoro
    I introduced a mapping for a business object which has (among others) a property called "Name":

      public class Foo : BusinessObjectBase
      {
          ...
          public virtual string Name { get; set; }
      }

    For some reason, when I fetch "Foo" objects, NHibernate seems to apply lazy property loading (for simple properties, not associations). The following code generates n+1 SQL statements, whereof the first fetches only the ids and the remaining n fetch the Name for each record:

      ISession session = ...
      IQuery query = session.CreateQuery(queryString);
      ITransaction tx = session.BeginTransaction();
      List<Foo> result = new List<Foo>();
      foreach (Foo foo in query.Enumerable())
      {
          result.Add(foo);
      }
      tx.Commit();
      session.Close();

    produces:

      NHibernate: select foo0_.FOO_ID as col_0_0_ from V1_FOO foo0_
      NHibernate: SELECT foo0_.FOO_ID as FOO1_2_0_, foo0_.NAME as NAME2_0_ FROM V1_FOO foo0_ WHERE foo0_.FOO_ID=:p0;:p0 = 81
      NHibernate: SELECT foo0_.FOO_ID as FOO1_2_0_, foo0_.NAME as NAME2_0_ FROM V1_FOO foo0_ WHERE foo0_.FOO_ID=:p0;:p0 = 36470
      NHibernate: SELECT foo0_.FOO_ID as FOO1_2_0_, foo0_.NAME as NAME2_0_ FROM V1_FOO foo0_ WHERE foo0_.FOO_ID=:p0;:p0 = 36473

    Similarly, the following code leads to a LazyInitializationException after the session is closed:

      ISession session = ...
      ITransaction tx = session.BeginTransaction();
      Foo result = session.Load<Foo>(id);
      tx.Commit();
      session.Close();
      Console.WriteLine(result.Name);

    Following this post, "lazy properties ... is rarely an important feature to enable ... (and) in Hibernate 3, is disabled by default." So what am I doing wrong? I managed to work around the exception by calling NHibernateUtil.Initialize(foo), but the even worse part is the n+1 SQL statements, which bring my application to its knees. This is how the mapping looks:

      <class name="Foo" table="V1_FOO">
          ...
          <property name="Name" column="NAME"/>
      </class>

    BTW: the abstract "BusinessObjectBase" base class encapsulates the ID property, which serves as the internal identifier.
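
    One thing worth ruling out (a hedged suggestion, not a confirmed diagnosis): IQuery.Enumerable() evaluates results lazily, issuing an id-only query up front and hydrating each entity as it is iterated, whereas IQuery.List<T>() materializes the whole result set in a single round trip. A minimal sketch of the same loop using List:

      // Same query, materialized in one round trip instead of one SELECT per row.
      ISession session = ...  // obtained as in the snippets above
      ITransaction tx = session.BeginTransaction();
      IList<Foo> result = session.CreateQuery(queryString).List<Foo>();
      tx.Commit();
      session.Close();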

  • Node & Redis: Crucial Design Issues in Production Mode

    - by Ali
    This question is a hybrid one, both technical and system-design related. I'm developing the backend of an application that will handle approx. 4K requests per second. We are using Node.js for its speed, and in terms of database structure we are using MongoDB, with Redis as a layer between Node and MongoDB handling volatile operations. I'm quite stressed because we are expecting concurrent requests that we need to handle carefully, and we are quite close to launch. However, I do not believe I've applied the correct approach to Redis. I have a class Student, and students constantly change states (such as 'active', 'doing homework', 'in lesson', etc.), so I created a Redis DB for each state (1 for 'active', 2 for 'doing homework'). Here is the structure of the 'active' students table:

      xa5p - JSON stringified object #1
      pQrW - JSON stringified object #2
      active_student_table - {{studentId:'xa5p'}, {studentId:'pQrW'}}

    Since there is no 'select all keys' method in Redis, it has been suggested that I use a set, such that when I run SMEMBERS I receive the keys, and later GET each id in order to find a specific user (say, one older than 15). It has also been suggested that I should never use KEYS in production. My question is: no matter how 'conceptual' it is, what specific things should I avoid doing with Node & Redis in production? Are there any issues with my design? Students must be objects, and I could certainly keep them in a list, but I haven't done that yet. Is that crucial for production?
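
    For comparison, a common Redis layout for this kind of state machine (a sketch with made-up key names) keeps one hash per student and one set of ids per state, so state changes are atomic set moves and "all active students" never needs a KEYS scan:

      HMSET student:xa5p name "Jane" age 16 state "active"
      SADD students:active xa5p

      SMOVE students:active students:homework xa5p
      HSET student:xa5p state "homework"

      SMEMBERS students:active
      HGETALL student:xa5p

    Filtering by a field such as age then means iterating the relevant set (ideally pipelined) rather than scanning the whole keyspace.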

  • Know of any Java garbage collection log analysis tools?

    - by braveterry
    I'm looking for a tool or a script that will take the console log from my web app, parse out the garbage collection information, and display it in a meaningful way. I'm starting up on a Sun Java 1.4.2 JVM with the following flags:

      -verbose:gc -XX:+PrintGCTimeStamps -XX:+PrintGCDetails

    The log output looks like this:

      54.736: [Full GC 54.737: [Tenured: 172798K->18092K(174784K), 2.3792658 secs] 257598K->18092K(259584K), [Perm : 20476K->20476K(20480K)], 2.4715398 secs]

    Making sense of a few hundred of these kinds of log entries would be much easier if I had a tool that would visually graph garbage collection trends.
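
    In the absence of a dedicated tool, a few lines of script can turn that format into plottable data. A minimal sketch (assuming exactly the log format shown above; other collectors would need a different regex):

      import re
      import sys

      # Matches lines like:  54.736: [Full GC ... 2.4715398 secs]
      # and captures the JVM uptime, the GC type, and the total pause time.
      PAUSE = re.compile(r'^(\d+\.\d+): \[(Full GC|GC).*(\d+\.\d+) secs\]\s*$')

      for line in sys.stdin:
          m = PAUSE.match(line.strip())
          if m:
              # Emit CSV (uptime, type, pause) for a spreadsheet or gnuplot.
              print('%s,%s,%s' % (m.group(1), m.group(2), m.group(3)))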

  • Should the code being tested compile to a DLL or an executable file?

    - by uriDium
    I have a solution with two projects: one for the production code and another for the unit tests. I did this as per the suggestions I got here on SO. I noticed that the Debug folder includes the production code in executable form. I used NUnit to run the tests after removing the executable, and they all fail trying to find it, so the test project definitely depends on it. I then did some quick reading on whether a DLL or an executable is better. It seems that a DLL is much faster, as it shares the caller's memory space, whereas communication between executables is slower. Unfortunately our production code needs to be an executable, so the unit tests will be slightly slower. I am not too worried about that. But the project also relies on code written in another library, which is currently in executable format as well. Should the projects that expose some sort of SDK rather be compiled to a DLL, and only the projects that use the SDK be compiled to executables?
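
    A common layout that sidesteps the question (a sketch of the usual convention, not a prescription): keep all production logic in a class library, make the executable a thin entry point over it, and have the test project reference only the library. The names below are made up for illustration:

      // MyApp.Core (class library / DLL): all testable production logic.
      namespace MyApp.Core
      {
          public class OrderProcessor
          {
              public decimal Total(decimal price, int quantity)
              {
                  return price * quantity;
              }
          }
      }

      // MyApp (console executable): a thin shell over the library.
      namespace MyApp
      {
          public static class Program
          {
              public static void Main()
              {
                  var processor = new MyApp.Core.OrderProcessor();
                  System.Console.WriteLine(processor.Total(9.99m, 3));
              }
          }
      }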

  • MSSQL Server high CPU and I/O activity database tuning

    - by zapping
    Our application has been running very slowly recently. Debugging and tracing showed that the process has high CPU usage and that SQL Server shows high I/O activity. Can you please give some guidance on how this can be optimized? The application is now about a year old and the database file sizes are not very big. The database is set to auto-shrink. It's running on Windows Server 2003 and SQL Server 2005, and the application is a web application coded in C# (VS2005).
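
    A few generic first steps for this symptom (a sketch; the database name is illustrative): auto-shrink causes repeated grow/shrink cycles and index fragmentation, and the SQL Server 2005 DMVs can show where the CPU and I/O are actually going:

      -- Auto-shrink fragments indexes and burns I/O; usually worth disabling.
      ALTER DATABASE MyAppDb SET AUTO_SHRINK OFF;

      -- Top waits since restart: a first hint whether the bottleneck is CPU or I/O.
      SELECT TOP 10 wait_type, wait_time_ms, waiting_tasks_count
      FROM sys.dm_os_wait_stats
      ORDER BY wait_time_ms DESC;

      -- Most expensive cached statements by average CPU.
      SELECT TOP 10
          qs.total_worker_time / qs.execution_count AS avg_cpu,
          qs.total_logical_reads / qs.execution_count AS avg_reads,
          SUBSTRING(st.text, 1, 200) AS query_text
      FROM sys.dm_exec_query_stats qs
      CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
      ORDER BY avg_cpu DESC;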

  • Tuning JVM (GC) for high responsive server application

    - by elgcom
    I am running an application server on 64-bit Linux with 8 CPU cores and 6 GB of memory. The server must be highly responsive. After some inspection I found that the application running on the server creates a rather huge number of short-lived objects, and has only about 200~400 MB of long-lived objects (as long as there is no memory leak). After reading http://java.sun.com/javase/technologies/hotspot/gc/gc_tuning_6.html I use these JVM options:

      -Xms2g -Xmx2g -XX:MaxPermSize=256m -XX:NewRatio=1 -XX:+UseConcMarkSweepGC

    Result: the minor GC takes 0.01~0.02 sec, the major GC takes 1~3 sec, and minor GCs happen constantly. How can I further improve or tune the JVM? A larger heap size? But will GC then take more time? Larger NewSize and MaxNewSize (for the young generation)? Another collector? Parallel GC? Is it a good idea to let major GCs take place more often, and how?
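
    One direction worth experimenting with (a sketch only; the right numbers depend on measured allocation rates): keep CMS but pair it with the parallel young-generation collector, size the young generation explicitly, and let CMS start its concurrent cycles earlier so major pauses shrink:

      -Xms2g -Xmx2g
      -XX:MaxPermSize=256m
      -Xmn768m                               # explicit young gen instead of -XX:NewRatio=1
      -XX:SurvivorRatio=8
      -XX:+UseConcMarkSweepGC
      -XX:+UseParNewGC                       # parallel minor collections across the 8 cores
      -XX:CMSInitiatingOccupancyFraction=70  # start CMS before the old gen fills up
      -XX:+UseCMSInitiatingOccupancyOnly
      -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps   # measure before/after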

  • Which memory related Tomcat JVM startup parameters are worth tuning?

    - by knorv
    I'm trying to understand the fine art of tuning Tomcat memory settings. In this quest I have the following three questions:

    1. Which memory-related JVM startup parameters are worth setting when running Tomcat? Why?
    2. What are useful rules of thumb when fine-tuning the memory settings for a Tomcat installation?
    3. How do you monitor the memory consumption of your live Tomcat installation?
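
    As a concrete starting point for the first and third questions (example values only, not recommendations): Tomcat picks up JVM parameters from CATALINA_OPTS, conventionally set in bin/setenv.sh, and jstat gives a cheap live view of heap usage:

      # bin/setenv.sh
      CATALINA_OPTS="-Xms512m -Xmx512m \
        -XX:MaxPermSize=128m \
        -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps"
      export CATALINA_OPTS

      # Live monitoring: heap/perm occupancy and GC counts every 5 s.
      #   jstat -gcutil <tomcat-pid> 5000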

  • How to not over-use jQuery?

    - by Fedyashev Nikita
    Typical jQuery over-use:

      $('button').click(function() {
          alert('Button clicked: ' + $(this).attr('id'));
      });

    Which can be simplified to:

      $('button').click(function() {
          alert('Button clicked: ' + this.id);
      });

    which is way faster. Can you give me any more examples of similar jQuery over-use?
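
    Another example in the same spirit (with the usual caveat that readability sometimes matters more than raw speed): re-querying the DOM on every use instead of caching the selection:

      // Over-use: three identical DOM queries.
      $('#menu').addClass('open');
      $('#menu').css('top', '0');
      $('#menu').fadeIn();

      // Better: query once, then chain or reuse.
      var $menu = $('#menu');
      $menu.addClass('open').css('top', '0').fadeIn();

      // Likewise, inside a handler this.value beats $(this).val().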

  • SQL Server query problem

    - by user335160
    I want to achieve the results shown in the attached image. The table relationships and the table structures and data are the following:

    Table relationships:

      Overall IB Limit -> one to many -> Facility Limit
      Facility Limit -> one to many -> Facility Sub Limit

    Overall IB Limit:

      Id | SCAF Reference | Approval Date
      1  | NEW-001        | January 1, 2011
      2  | NEW-002        | January 2, 2011
      3  | NEW-003        | January 3, 2011

    Facility Limit:

      Id | OverallIBLimitId | Product Type
      1  | 1                | RPA
      2  | 1                | CG
      3  | 2                | RPA
      4  | 3                | CG

    Facility Sub Limit:

      Id | FacilityLimitId | Sub-Limit Type    | Amount        | Tenor    | Status   | Status Date
      1  | 1               | RPA at max        | 2,000,0000.00 | 2 months | Approved | January 5, 2011
      2  | 1               | Oil               | 3,000,0000.00 | 3 yrs    | Approved | January 5, 2011
      3  | 2               | CG at minor       | 4,000,0000.00 | 1 yr     | Approved | January 5, 2011
      4  | 2               | CG at max         | 5,000,0000.00 | 6 months | Approved | January 5, 2011
      5  | 2               | Flood Component 1 | 5,000,0000.00 | 6 months | Approved | January 5, 2011
      6  | 2               | Flood Component 2 | 6,000,0000.00 | 3 yrs    | Approved | January 5, 2011
      7  | 3               | RPA at minor      | 1,000,0000.00 | 6 months | Approved | January 5, 2011
      8  | 4               | One-Off           | 1,000,0000.00 | 6 months | Approved | January 5, 2011

  • Prepending to a multi-gigabyte file.

    - by dafmetal
    What would be the most performant way to prepend a single character to a multi-gigabyte file (in my practical case, a 40 GB file)? There is no limitation on the implementation: it can be a tool, a shell script, or a program in any programming language.
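
    For context, a minimal sketch of the stream-rewrite approach (the file name is assumed; mainstream filesystems offer no in-place prepend, so the whole file has to be rewritten once either way):

      # Write the new first byte, then stream the original 40 GB after it.
      { printf 'X'; cat huge.dat; } > huge.new && mv huge.new huge.dat
      # Needs free space for a second copy; shifting the file in place instead
      # would cost the same I/O plus extra seeks.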

  • Unicorn: Which number of worker processes to use?

    - by blackbird07
    I am running a Ruby on Rails app on a virtual Linux server that is capped at 1 GB RAM. Currently I am constantly hitting the limit and would like to optimize memory utilization. One option I am looking at is reducing the number of Unicorn workers. So what is the best way to determine the number of Unicorn workers to use? The current setting is 10 workers, but the maximum number of requests per second I have seen on Google Analytics Real-Time is 3 (reached only once at a peak time; 99% of the time traffic does not go above 1 request per second). So is it a safe assumption that I can, for now, go with 4 workers, leaving room for unexpected bursts of requests? What metrics should I look at to determine the number of workers, and what tools can I use for that on my Ubuntu machine?
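
    The usual back-of-the-envelope for a memory-capped box (a sketch with assumed numbers, not measurements from this app): workers = (RAM budget - everything else) / per-worker RSS. For example, with roughly 700 MB left for Unicorn and ~150 MB per worker, 4 workers fit. The count itself is set in the Unicorn config:

      # config/unicorn.rb -- example values
      worker_processes 4    # (1024 MB - ~300 MB OS/other) / ~150 MB per worker
      timeout 30
      preload_app true      # copy-on-write lets workers share the loaded app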

  • How can I monitor the rendering time in a browser?

    - by adpd
    I work on an internal corporate system that has a web front-end as one of its interfaces. The web front-end is served up using Tomcat. How can I monitor the rendering time of specific pages in a browser (IE6)? I would like to be able to record the results in a log file (separate log file or the Tomcat access log).
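
    One low-tech option that works in IE6 (a sketch; the /perf endpoint name is made up) is to stamp the time at the top of the page, measure again in onload, and beacon the difference back so it lands in the Tomcat access log:

      // At the very top of <head>: record when parsing started.
      var pageStart = new Date().getTime();

      // At the end of the page: report elapsed time once everything has loaded.
      window.onload = function () {
          var elapsed = new Date().getTime() - pageStart;
          var img = new Image();
          // The /perf request (and its query string) appears in the access log.
          img.src = '/perf?page=' + encodeURIComponent(location.pathname) + '&ms=' + elapsed;
      };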

  • Why are compilers so stupid?

    - by martinus
    I always wonder why compilers can't figure out simple things that are obvious to the human eye. They do lots of simple optimizations, but never anything even a little bit complex. For example, this code takes about 6 seconds on my computer to print the value zero (using Java 1.6):

      int x = 0;
      for (int i = 0; i < 100 * 1000 * 1000 * 1000; ++i) {
          x += x + x + x + x + x;
      }
      System.out.println(x);

    It is totally obvious that x is never changed, so no matter how often you add 0 to itself it stays zero. So the compiler could in theory replace this with System.out.println(0). Or even better, this takes 23 seconds:

      public int slow() {
          String s = "x";
          for (int i = 0; i < 100000; ++i) {
              s += "x";
          }
          return 10;
      }

    First, the compiler could notice that I am actually creating a string s of 100,000 "x" characters, so it could automatically use a StringBuilder instead, or even better, directly replace it with the resulting string, as it is always the same. Second, it does not recognize that I do not actually use the string at all, so the whole loop could be discarded! Why, after so much manpower has gone into fast compilers, are they still so relatively dumb?

    EDIT: Of course these are stupid examples that should never be used anywhere. But whenever I have to rewrite beautiful and very readable code into something unreadable so that the compiler is happy and produces fast code, I wonder why compilers or some other automated tool can't do this work for me.
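
    For reference, a sketch of the manual rewrite the question alludes to, i.e. what the author wishes the compiler would do automatically for the second example:

      public int fast() {
          StringBuilder sb = new StringBuilder(100000);
          for (int i = 0; i < 100000; ++i) {
              sb.append("x");        // amortized O(1) instead of copying the whole string
          }
          String s = sb.toString(); // still unused, so in theory the loop could be dropped entirely
          return 10;
      }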

  • How to get a count of ManagementObjects (WMI results) without enumerating through the collection in .NET

    - by Mark
    When querying for a large amount of data through WMI (say, the Windows event log via Win32_NTLogEvent), it is very useful to know what kind of numbers you are getting yourself into before downloading all the content. Is there a way to do this? As far as I know, there is no "SELECT COUNT(*) FROM Win32_NTLogEvent" in WQL. And as far as I know, the Count property of the ManagementObjectCollection actually enumerates through all the results, whether you have the Rewindable property set to true or false. If it cannot be done in .NET, can it be done by directly using the underlying IWbem objects? Thanks
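
    If counting is unavoidable, one mitigation (a sketch; it still enumerates, but avoids buffering the whole result set) is a forward-only enumeration that returns a single small property per object:

      using System;
      using System.Management;

      class WmiCount
      {
          static void Main()
          {
              // Forward-only streaming: results are not cached in memory.
              EnumerationOptions options = new EnumerationOptions();
              options.Rewindable = false;
              options.ReturnImmediately = true;

              ManagementObjectSearcher searcher = new ManagementObjectSearcher(
                  new ManagementScope(@"\\.\root\cimv2"),
                  new ObjectQuery("SELECT RecordNumber FROM Win32_NTLogEvent"),
                  options);

              int count = 0;
              foreach (ManagementObject mo in searcher.Get())
              {
                  count++;  // only the count is kept; objects are discarded as we go
              }
              Console.WriteLine(count);
          }
      }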

  • Django: Update order attribute for objects in a queryset

    - by lazerscience
    I have an attribute on my model that allows the user to order the objects. I have to update the elements' order depending on a list that contains the objects' ids in the new order; right now I'm iterating over the whole queryset and saving one object after the other. What would be the easiest/fastest way to do the same with the whole queryset?

      def update_ordering(model, order):
          """order is in the form [id, id, id, id], for example: [8, 4, 5, 1, 3]"""
          id_to_order = dict((order[i], i) for i in range(len(order)))
          for x in model.objects.all():
              x.order = id_to_order[x.id]
              x.save()
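
    One incremental improvement (a sketch, assuming order contains the id of every object in the queryset): issue one UPDATE per id through the queryset, which skips fetching and re-saving full model instances:

      def update_ordering(model, order):
          """order is the list of object ids in their new order, e.g. [8, 4, 5, 1, 3]."""
          for position, obj_id in enumerate(order):
              # One UPDATE statement per object; no SELECT, no model instantiation.
              model.objects.filter(id=obj_id).update(order=position)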

  • integer division in php

    - by oezi
    Hi guys, I'm looking for the fastest way to do an integer division in PHP. For example, 5 / 2 should be 2, 6 / 2 should be 3, and so on. If I simply do this, PHP will return 2.5 in the first case. The only solution I could find was using intval($my_number / 2), which isn't as fast as I want it to be (but gives the expected results). Can anyone help me out with this?
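
    Two equivalent options worth benchmarking against intval() (a sketch; relative speed varies by PHP version): a plain (int) cast is a language construct rather than a function call, and for divisors that are powers of two a right shift does the same job on non-negative integers:

      <?php
      $n = 5;

      $a = intval($n / 2);  // function call
      $b = (int)($n / 2);   // cast: same result, no function-call overhead
      $c = $n >> 1;         // bit shift: divide by 2 for non-negative integers

      var_dump($a, $b, $c); // int(2) int(2) int(2)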

  • Why does SQLite take such a long time to fetch the data?

    - by Derk
    I have two possible queries, both giving the result set I want. Query one takes about 30 ms to run, but 150 ms to fetch the data from the database:

      SELECT id
      FROM featurevalues as featval3
      WHERE featval3.feature IN (?,?,?,?)
      AND EXISTS (
          SELECT 1
          FROM product_to_value, product_to_value as prod2, features, featurevalues
          WHERE product_to_value.feature = features.int
          AND product_to_value.value = featurevalues.id
          AND features.id = ?
          AND featurevalues.id IN (?,?)
          AND product_to_value.product = prod2.product
          AND prod2.value = featval3.id
      )

    Query two takes about 3 ms to run (this is the one I therefore prefer), but 170 ms to fetch the data:

      SELECT (
          SELECT prod2.value
          FROM product_to_value, product_to_value as prod2, features, featurevalues
          WHERE product_to_value.feature = features.int
          AND product_to_value.value = featurevalues.id
          AND features.id = ?
          AND featurevalues.id IN (?,?)
          AND product_to_value.product = prod2.product
          AND prod2.value = featval3.id
      ) as id
      FROM featurevalues as featval3
      WHERE featval3.feature IN (?,?,?,?)

    The 170 ms seems to be related to the number of rows in table featval3. After an index is used on featval3.feature IN (?,?,?,?), 151 items "remain" in featval3. Is there something obvious I am missing regarding the slow fetching? As far as I know everything is properly indexed. I am confused because the second query takes only a blazing 3 ms to run.
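
    When run time and fetch time diverge like this, the usual suspect is work deferred until rows are actually produced. SQLite's query-plan output (a diagnostic sketch, reusing the first query) shows which indexes are used and whether the correlated subquery re-executes for each of the 151 rows as they are fetched:

      EXPLAIN QUERY PLAN
      SELECT id
      FROM featurevalues as featval3
      WHERE featval3.feature IN (?,?,?,?)
      AND EXISTS (
          SELECT 1
          FROM product_to_value, product_to_value as prod2, features, featurevalues
          WHERE product_to_value.feature = features.int
          AND product_to_value.value = featurevalues.id
          AND features.id = ?
          AND featurevalues.id IN (?,?)
          AND product_to_value.product = prod2.product
          AND prod2.value = featval3.id
      );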

  • Should I go with Varnish instead of nginx?

    - by gotts
    I really like nginx. But recently I've found that Varnish gives you the opportunity to implement a smart caching reverse-proxy layer (with URL purging). I have a cluster of mongrels which are pretty resource-intensive, so if this caching layer can remove some load from the mongrels, that would be a great thing. I didn't find a way to implement the same kind of caching layer for application pages (static content is cacheable, of course) with nginx. Should I use Varnish instead? What would you recommend?
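
    For reference, the URL purging in question looks roughly like this in VCL (a sketch in the style of the Varnish 2.x documentation; details vary by version), so the app can invalidate a page right after a mongrel updates the underlying data:

      acl purge {
          "localhost";   # only the app servers may purge
      }

      sub vcl_recv {
          if (req.request == "PURGE") {
              if (!client.ip ~ purge) {
                  error 405 "Not allowed.";
              }
              return (lookup);
          }
      }

      sub vcl_hit {
          if (req.request == "PURGE") {
              set obj.ttl = 0s;    # drop the cached object for this URL
              error 200 "Purged.";
          }
      }

      sub vcl_miss {
          if (req.request == "PURGE") {
              error 404 "Not in cache.";
          }
      }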

  • Database structure for ecommerce site

    - by imanc
    Hey guys, I have been tasked with designing an ecommerce solution. The aspect that is causing me the most problems is the database. Currently the site consists of 10+ country-based shops, each with its own database (all residing on the same MySQL instance). For the new site I'd rather all these shop databases were merged into one database, so that all tables (products, orders, customers, etc.) have a shop_id field. From a programming perspective this seems to make the most sense, as we won't have to manage data across multiple databases.

    Currently the entire site generates about 120k orders a year, but it is experiencing fairly heavy growth and we need to design a solution that will scale. In 5 years there may be more than a million orders per year, and the database would contain 5 years of order history (archiving may be a solution here). The question is: do we use a single database, or do we keep the database-per-shop structure? I am currently trying to find supporting evidence for either avenue. The company I am designing the solution for prefers the per-shop database structure because they believe it will allow the sites to scale. But my argument is that each shop's database probably won't get so busy over the next few years that it exceeds the capacity of a MySQL database on a "no expenses spared" hardware set-up.

    I am wondering if anyone has any advice either way? Does anyone have experience with websites / ecommerce sites that have tables containing millions of records? I know there is probably no clear answer here, but at what stage do we have too many records, or table files too large, for a fast-loading site? Also, if anyone has any advice on sources of information (books, websites, etc.) where I can do further research, it would be highly appreciated! Cheers, imanc
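
    For concreteness, the merged single-database layout under discussion amounts to a schema like this (a sketch with assumed column names), where every shop-scoped query filters on shop_id and a composite index keeps those lookups fast even at millions of rows:

      CREATE TABLE orders (
          id          INT UNSIGNED NOT NULL AUTO_INCREMENT,
          shop_id     SMALLINT UNSIGNED NOT NULL,   -- which country shop owns the order
          customer_id INT UNSIGNED NOT NULL,
          created_at  DATETIME NOT NULL,
          total       DECIMAL(10,2) NOT NULL,
          PRIMARY KEY (id),
          KEY idx_shop_created (shop_id, created_at) -- shop-scoped listings stay indexed
      ) ENGINE=InnoDB;

      -- Every shop-facing query carries the scope:
      -- SELECT ... FROM orders WHERE shop_id = ? ORDER BY created_at DESC LIMIT 50;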

  • mysql subquery strangely slow

    - by aviv
    I have a query that selects from another sub-query. While the two queries look almost the same, the second query (in this sample) runs much slower:

      SELECT user.id, user.first_name
          -- user.*
      FROM user
      WHERE user.id IN (SELECT ref_id
                        FROM education
                        WHERE ref_type='user'
                        AND education.institute_id='58'
                        AND education.institute_type='1');

    This query takes 1.2 s. EXPLAIN on this query gives:

      id select_type        table     type           possible_keys                                          key        key_len ref  rows   Extra
      1  PRIMARY            user      index                                                                 first_name 152          141192 Using where; Using index
      2  DEPENDENT SUBQUERY education index_subquery ref_type,ref_id,institute_id,institute_type,ref_type_2 ref_id     4       func 1      Using where

    The second query:

      SELECT
          -- user.id
          -- user.first_name
          user.*
      FROM user
      WHERE user.id IN (SELECT ref_id
                        FROM education
                        WHERE ref_type='user'
                        AND education.institute_id='58'
                        AND education.institute_type='1');

    takes 45 s to run, with EXPLAIN:

      id select_type        table     type           possible_keys                                          key    key_len ref  rows   Extra
      1  PRIMARY            user      ALL                                                                                        141192 Using where
      2  DEPENDENT SUBQUERY education index_subquery ref_type,ref_id,institute_id,institute_type,ref_type_2 ref_id 4       func 1      Using where

    Why is it slower if I query only by indexed fields? Why do both queries scan the full length of the user table? Any ideas how to improve this? Thanks.
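
    One rewrite worth trying (a sketch; MySQL 5.0/5.1 executes IN (SELECT ...) as a dependent subquery re-checked per user row, which together with user.* matches the full scan in the second EXPLAIN): turn the subquery into a join so education drives the lookup. DISTINCT guards against a user matching more than one education row:

      SELECT DISTINCT user.*
      FROM user
      JOIN education ON education.ref_id = user.id
      WHERE education.ref_type = 'user'
        AND education.institute_id = '58'
        AND education.institute_type = '1';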
