Search Results

Search found 12988 results on 520 pages for 'performance'.

  • Scalability of Ruby on Rails versus PHP

    - by Daniel
    Can anyone comment on which is more scalable, RoR or PHP? I have heard that RoR is less scalable than PHP, since RoR carries a little more overhead with its MVC framework while PHP is lower level and lighter. This is a bit vague - can anyone explain it better?

    Read the article

  • No improvement in speed when using Ehcache with Hibernate

    - by paddydub
    I'm getting no improvement in speed when using Ehcache with Hibernate. Here are the results I get when I run the test below. The test reads 80 Stop objects, then reads the same 80 Stop objects again using the cache. On the second read it is hitting the cache, but there is no improvement in speed. Any ideas on what I'm doing wrong?

    Speed test:

        First read:  reading stops 1-80 : 288ms
        Second read: reading stops 1-80 : 275ms

    Cache info:

        elementsInMemory: 79
        elementsInMemoryStore: 79
        elementsInDiskStore: 0

    JunitCacheTest:

        public class JunitCacheTest extends TestCase {
            static Cache stopCache;

            public void testCache() {
                ApplicationContext context = new ClassPathXmlApplicationContext("beans-hibernate.xml");
                StopDao stopDao = (StopDao) context.getBean("stopDao");
                CacheManager manager = new CacheManager();
                stopCache = (Cache) manager.getCache("ie.dataStructure.Stop.Stop");

                // First read (note: i < 80 reads stops 1-79, matching the 79 cached elements)
                for (int i = 1; i < 80; i++) {
                    Stop toStop = stopDao.findById(i);
                }
                // Second read
                for (int i = 1; i < 80; i++) {
                    Stop toStop = stopDao.findById(i);
                }

                System.out.println("elementsInMemory " + stopCache.getSize());
                System.out.println("elementsInMemoryStore " + stopCache.getMemoryStoreSize());
                System.out.println("elementsInDiskStore " + stopCache.getDiskStoreSize());
            }

            public static Cache getStopCache() {
                return stopCache;
            }
        }

    HibernateStopDao:

        @Repository("stopDao")
        public class HibernateStopDao implements StopDao {
            private SessionFactory sessionFactory;

            @Transactional(readOnly = true)
            public Stop findById(int stopId) {
                Cache stopCache = JunitCacheTest.getStopCache();
                Element cacheResult = stopCache.get(stopId);
                if (cacheResult != null) {
                    return (Stop) cacheResult.getValue();
                } else {
                    Stop result = (Stop) sessionFactory.getCurrentSession().get(Stop.class, stopId);
                    stopCache.put(new Element(result.getStopID(), result));
                    return result;
                }
            }
        }

    ehcache.xml:

        <cache name="ie.dataStructure.Stop.Stop"
               maxElementsInMemory="1000"
               eternal="false"
               timeToIdleSeconds="5200"
               timeToLiveSeconds="5200"
               overflowToDisk="true"/>

    stop.hbm.xml:

        <class name="ie.dataStructure.Stop.Stop" table="stops" catalog="hibernate3" mutable="false">
            <cache usage="read-only"/>
            <id name="stopID" type="int">
                <column name="STOPID"/>
                <generator class="assigned"/>
            </id>
            <property name="coordinateID" type="int">
                <column name="COORDINATEID" not-null="true"/>
            </property>
            <property name="routeID" type="int">
                <column name="ROUTEID" not-null="true"/>
            </property>
        </class>

    Stop:

        public class Stop implements Comparable<Stop>, Serializable {
            private static final long serialVersionUID = 7823769092342311103L;
            private Integer stopID;
            private int routeID;
            private int coordinateID;
        }
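
    Editor's note: the DAO above maintains its own Ehcache region by hand, bypassing the second-level cache that the <cache usage="read-only"/> mapping already declares, so both reads pay the full Spring proxy and Hibernate call overhead either way. A minimal sketch of the idiomatic alternative, assuming the Ehcache provider is wired into the Hibernate configuration (hibernate.cache.use_second_level_cache=true plus the provider class):

        // Hedged sketch: with the second-level cache enabled, Session.get()
        // consults the "ie.dataStructure.Stop.Stop" region itself - no
        // manual Element bookkeeping needed in the DAO.
        @Transactional(readOnly = true)
        public Stop findById(int stopId) {
            return (Stop) sessionFactory.getCurrentSession().get(Stop.class, stopId);
        }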

    Read the article

  • Missing 'DomContentLoaded' and 'load' time information in Firebug's Net Panel.

    - by stony_dreams
    Hello, Firebug is awesome at reporting the relative time at which an HTTP request was made with respect to the 'DomContentLoaded' and 'load' times. However, once the 'load' event occurs (marked by the red line on the timeline), the requests made after it carry no information about how much later they occurred relative to the two events. Confusingly, these requests (usually at the bottom of the timeline) appear to have started right at the beginning of the page load. Could somebody shed some light on what I should infer when I see such entries in the timeline: entries that carry no 'DomContentLoaded' or 'load' times, occurred after the page 'load' event, and yet are shown by the Net panel as starting at the beginning? Thanks!

    Read the article

  • ASP.NET page runs slowly in production

    - by Brandi
    I have created an ASP.NET page that works flawlessly and quickly from Visual Studio. It does a very large read from a database on our network to load a GridView inside an UpdatePanel, and displays progress in an AJAX ModalPopupExtender. Of course I don't expect it to be instant given the large DB reads, but it takes on the order of seconds, not minutes. This all works great until I put it up on the server: it is very, VERY slow when I access it via the internet, taking several minutes to load the database information into the GridView. I'm baffled why it doesn't perform exactly the same as it does from Visual Studio. (It is in release mode and I have turned off the debug flag.) I have since tried things like eliminating unneeded UpdatePanels and throwing out the AJAX tool; nothing has made it any faster in production. As far as I know it is not the database, since it has been consistently fast from my computer (from Visual Studio) and consistently slow from the server. So, where do I look next? Has anyone else had this problem before? Could this be caused by UpdatePanels or ModalPopupExtenders in different parts of the application? Why would the live behaviour differ so much from the localhost behaviour? Both the server with the ASP.NET page and the database server are on our network. I'm using Visual Studio 2008. Thank you in advance for any insight or advice.

    Read the article

  • Coding Practices which enable the compiler/optimizer to make a faster program.

    - by EvilTeach
    Many years ago, C compilers were not particularly smart. As a workaround, K&R invented the register keyword to hint to the compiler that it might be a good idea to keep a given variable in an internal register. They also made the ternary operator to help it generate better code. As time passed, the compilers matured: their flow analysis became good enough to make better decisions about which values to hold in registers than you could possibly make, and the register keyword became unimportant. FORTRAN can be faster than C for some sorts of operations due to aliasing issues; in theory, with careful coding, one can get around this restriction and enable the optimizer to generate faster code. What coding practices are available that may enable the compiler/optimizer to generate faster code? Identifying the platform and compiler you use would be appreciated. Why does the technique seem to work? Sample code is encouraged. Here is a related question. [Edit] This question is not about the overall process of profiling and optimizing. Assume that the program has been written correctly, compiled with full optimization, tested, and put into production. There may be constructs in your code that prohibit the optimizer from doing the best job it can. What refactorings will remove these prohibitions and allow the optimizer to generate even faster code? [Edit] Offset related link
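
    Editor's illustration (a hedged sketch, not from the original post): one practice in exactly this spirit is removing potential aliasing so the optimizer can keep a value in a register - the same problem C99's restrict qualifier addresses, shown here in Java by hoisting a field into a local. A non-volatile field can, as far as the compiler can prove, be changed by anything reachable from inside the loop, whereas the hoisted copy is provably private; whether this wins anything depends on the JIT and should be measured.

        // Hedged sketch: 'scale' read through 'this' versus a hoisted local.
        class Scaler {
            private double scale = 1.5;

            double sumScaledFieldRead(double[] data) {
                double sum = 0.0;
                for (int i = 0; i < data.length; i++) {
                    sum += data[i] * scale; // optimizer may have to reload the field
                }
                return sum;
            }

            double sumScaledHoisted(double[] data) {
                final double s = scale; // provably unaliased local, register-friendly
                double sum = 0.0;
                for (int i = 0; i < data.length; i++) {
                    sum += data[i] * s;
                }
                return sum;
            }
        }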

    Read the article

  • When is Java faster than C++ (or when is JIT faster than precompiled)?

    - by kostja
    I have heard that under certain circumstances, Java programs, or rather parts of Java programs, can execute faster than the "same" code in C++ (or other precompiled code) due to JIT optimizations: the compiler is able to determine the scope of some variables, avoid some conditionals, and pull similar tricks at runtime. Could you give an example (or better, some examples) where this applies? And maybe outline the exact conditions under which the compiler is able to optimize the bytecode beyond what is possible with precompiled code? NOTE: This question is not about comparing Java to C++. It's about the possibilities of JIT compiling. Please no flaming. I am also not aware of any duplicates; please point them out if you are.
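
    The textbook example (editor's sketch, assuming HotSpot-style speculative optimization; the class and method names are hypothetical) is devirtualization: a JIT can observe at runtime that a call site only ever dispatches to one concrete type, inline the call, and deoptimize later if that assumption breaks. An ahead-of-time compiler must keep the indirect virtual call, because a new subclass could be loaded at any time.

        interface Shape {
            double area();
        }

        final class Circle implements Shape {
            private final double r;
            Circle(double r) { this.r = r; }
            public double area() { return Math.PI * r * r; }
        }

        public class Devirt {
            public static void main(String[] args) {
                Shape[] shapes = new Shape[1000000];
                for (int i = 0; i < shapes.length; i++) {
                    shapes[i] = new Circle(i % 10);
                }
                double total = 0.0;
                // Monomorphic in practice: the JIT can profile this site,
                // inline Circle.area(), and drop the virtual dispatch.
                for (Shape s : shapes) {
                    total += s.area();
                }
                System.out.println(total);
            }
        }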

    Read the article

  • Cocoa - does CGDataProviderCopyData() actually copy the bytes? Or just the pointer?

    - by jtrim
    I'm calling that function in quick succession, as fast as I can, and the faster the better. So, obviously, if CGDataProviderCopyData() actually copies the data byte for byte, then there must be a faster way to access that data directly - it's just bytes in memory. Does anyone know for sure whether CGDataProviderCopyData() actually copies the data, or whether it just creates a new pointer to the existing data?

    Read the article

  • SQL query: use WHERE IN or foreach?

    - by phenevo
    Hi, I'm using a query containing a piece like this:

        ... WHERE code IN ('val1', 'val2', ...)

    I have about 50k of these codes. It worked when I had 30k codes, but now I get: "The query processor ran out of internal resources and could not produce a query plan. This is a rare event and only expected for extremely complex queries or queries that reference a very large number of tables or partitions." I think the problem is related to IN. So now I'm planning to use:

        foreach (string code in codes)
            ... WHERE code = code

    Is that a good idea?
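
    Editor's note: looping one query per code trades the plan-size problem for 50k round trips. A common middle ground (a hedged sketch in Java/JDBC - the table and column names are hypothetical, the same shape works in C#/ADO.NET, and bulk-loading the codes into a temp table and joining against it is the other standard option) is to chunk the IN list:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.util.ArrayList;
        import java.util.List;

        public class ChunkedIn {
            private static final int CHUNK = 1000; // small enough for the plan builder

            static List<String> findMatches(Connection conn, List<String> codes) throws Exception {
                List<String> found = new ArrayList<String>();
                for (int start = 0; start < codes.size(); start += CHUNK) {
                    List<String> chunk = codes.subList(start, Math.min(start + CHUNK, codes.size()));
                    // build "?,?,...,?" - one placeholder per code in this chunk
                    StringBuilder in = new StringBuilder();
                    for (int i = 0; i < chunk.size(); i++) {
                        in.append(i == 0 ? "?" : ",?");
                    }
                    PreparedStatement ps = conn.prepareStatement(
                            "SELECT code FROM items WHERE code IN (" + in + ")");
                    try {
                        for (int i = 0; i < chunk.size(); i++) {
                            ps.setString(i + 1, chunk.get(i));
                        }
                        ResultSet rs = ps.executeQuery();
                        while (rs.next()) {
                            found.add(rs.getString(1));
                        }
                    } finally {
                        ps.close();
                    }
                }
                return found;
            }
        }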

    Read the article

  • Unicorn: How many worker processes should I use?

    - by blackbird07
    I am running a Ruby on Rails app on a virtual Linux server that is capped at 1GB RAM. Currently I am constantly hitting the limit and would like to optimize memory utilization. One option I am looking at is reducing the number of Unicorn workers. So what is the best way to determine the number of Unicorn workers to use? The current setting is 10 workers, but the maximum number of requests per second I have seen on Google Analytics Real-Time is 3 (reached only once, at a peak time; 99% of the time it does not go above 1 request per second). So is it a safe assumption that I can, for now, go with 4 workers, leaving room for unexpected bursts of requests? What metrics should I look at to determine the number of workers, and what tools can I use for that on my Ubuntu machine?

    Read the article

  • A GUID as the MySQL table's Primary Key or as a separate column

    - by Ben
    I have a multi-process program that performs, in a 2-hour period, 5-10 million inserts into a 34GB table within a single master/slave MySQL setup (plus an equal number of reads in that period). The table in question has only 5 fields and 3 (single-field) indexes. The primary key is auto-incrementing. I am far from a DBA, but the database appears to be crippled during this two-hour period. So, I have a couple of general questions.

    1) How much bang will I get out of batching these writes into units of 10? Currently I write each insert serially because, after writing, I immediately need to know, in my program, the resulting primary key of each insert. The PK is the only unique field at present, and approximating the insertion order with something like a datetime field or a multi-column value is not acceptable. If I perform a bulk insert, I won't know these IDs, which is a problem.

    So I've been thinking about turning the auto-increment primary key into a GUID and enforcing uniqueness. I've also been kicking around the idea of creating a new column just for the GUID, though I don't really see what that achieves that the PK approach doesn't already offer. As far as I can tell, the big downside to making the PK a randomly generated number is that the index would take a long time to update on each insert (since insertion order would not be sequential). Is that an acceptable approach for a table that is taking this number of writes? Thanks, Ben
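
    Editor's note on the "bulk insert loses the IDs" premise: JDBC can return the auto-increment keys for a whole batch, which may remove the reason for switching to GUIDs at all. A hedged sketch (table and column names are hypothetical; driver support for generated keys on batches varies, so verify with your connector - MySQL Connector/J does support it):

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.Statement;
        import java.util.ArrayList;
        import java.util.List;

        public class BatchedInserts {
            static List<Long> insertBatch(Connection conn, List<String> payloads) throws Exception {
                PreparedStatement ps = conn.prepareStatement(
                        "INSERT INTO events (payload) VALUES (?)", // hypothetical table
                        Statement.RETURN_GENERATED_KEYS);
                List<Long> keys = new ArrayList<Long>();
                try {
                    for (String p : payloads) {
                        ps.setString(1, p);
                        ps.addBatch();
                    }
                    ps.executeBatch();
                    // read back the auto-increment PKs the server assigned,
                    // in insertion order
                    ResultSet rs = ps.getGeneratedKeys();
                    while (rs.next()) {
                        keys.add(rs.getLong(1));
                    }
                } finally {
                    ps.close();
                }
                return keys;
            }
        }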

    Read the article

  • Scalable Database Tagging Schema

    - by Longpoke
    EDIT: To people building tagging systems: don't read this. It is not what you are looking for. I asked this when I wasn't aware that RDBMSs all have their own optimization methods; just use a simple many-to-many scheme.

    I have a posting system that has millions of posts. Each post can have an unlimited number of tags associated with it. Users can create tags, which have notes, a creation date, an owner, etc. A tag is almost like a post itself, because people can post notes about the tag. Each tag association has an owner and a date, so we can see who added the tag and when.

    My question is, how can I implement this? Searching posts by tag, or tags by post, has to be fast. Also, users can add tags to posts by typing the name into a field, a bit like the Google search bar: it has to fill in the rest of the tag name for you. I have 3 solutions at the moment, but I'm not sure which is best, or whether there is a better way. Note that I'm not showing the layout of notes, since it will be trivial once I have a proper solution for tags.

    Method 1: linked list. tagId in post points to a linked list in tag_assoc; the application must traverse the list until flink = 0.

        post:      id, content, ownerId, date, tagId, notesId
        tag_assoc: id, tagId, ownerId, flink
        tag:       id, name, notesId

    Method 2: denormalization. tags is simply a VARCHAR or TEXT field containing a tab-delimited array of tagId:ownerId pairs. It cannot be a fixed size.

        post: id, content, ownerId, date, tags, notesId
        tag:  id, name, notesId

    Method 3: "Toxi" (from http://www.pui.ch/phred/archives/2005/04/tags-database-schemas.html; also the same thing here: http://stackoverflow.com/questions/20856/how-do-you-recommend-implementing-tags-or-tagging).

        post:      id, content, ownerId, date, notesId
        tag_assoc: ownerId, tagId, postId
        tag:       id, name, notesId

    Method 3 raises the question: how fast will it be to iterate through every single row in tag_assoc? Methods 1 and 2 should be fast for returning tags by post, but for posts by tag another lookup table must be made. The last thing I have to worry about is optimizing searching tags by name; I have not worked that out yet. I made an ASCII diagram here: http://pastebin.com/f1c4e0e53

    Read the article

  • MySQL Locking Up

    - by Ian
    I've got an InnoDB table that gets a lot of reads and almost no writes (roughly 1 write for every 400,000 reads). I'm running into a pretty big problem, though, when I INSERT into the table: MySQL completely locks up. It uses 100% CPU, and every single other table (even in other databases) has its status set to "Locked" until the INSERT is done. This is a big problem because MySQL stays locked up for up to 4 minutes. I'm using version 5.1.47 (RPM from mysql.com). Any ideas?

    Read the article

  • Which memory-related Tomcat JVM startup parameters are worth tuning?

    - by knorv
    I'm trying to understand the fine art of tuning Tomcat memory settings. In this quest I have the following three questions: Which memory-related JVM startup parameters are worth setting when running Tomcat, and why? What are useful rules of thumb when fine-tuning the memory settings for a Tomcat installation? How do you monitor the memory consumption of your live Tomcat installation?
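
    On the monitoring question, the usual external tools are JConsole (or jstat/jmap) attached to the Tomcat process; the parameters most often set are -Xms/-Xmx for the heap and, on pre-Java-8 VMs, -XX:MaxPermSize. From inside the JVM, the standard java.lang.management API reports the same numbers. A minimal probe (editor's sketch - the class name is made up; the same calls work inside a servlet or exposed over JMX):

        import java.lang.management.ManagementFactory;
        import java.lang.management.MemoryMXBean;
        import java.lang.management.MemoryUsage;

        public class MemoryProbe {
            public static void main(String[] args) {
                MemoryMXBean mx = ManagementFactory.getMemoryMXBean();
                MemoryUsage heap = mx.getHeapMemoryUsage();
                MemoryUsage nonHeap = mx.getNonHeapMemoryUsage();
                // >> 20 converts bytes to megabytes
                System.out.printf("heap: used=%dMB committed=%dMB max=%dMB%n",
                        heap.getUsed() >> 20, heap.getCommitted() >> 20, heap.getMax() >> 20);
                System.out.printf("non-heap: used=%dMB committed=%dMB%n",
                        nonHeap.getUsed() >> 20, nonHeap.getCommitted() >> 20);
            }
        }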

    Read the article

  • Why are compilers so stupid?

    - by martinus
    I always wonder why compilers can't figure out simple things that are obvious to the human eye. They do lots of simple optimizations, but never something even a little bit complex. For example, this code takes about 6 seconds on my computer to print the value zero (using Java 1.6):

        int x = 0;
        // note: 100 * 1000 * 1000 * 1000 overflows int, so the actual bound
        // is 1,215,752,192 - still over a billion iterations
        for (int i = 0; i < 100 * 1000 * 1000 * 1000; ++i) {
            x += x + x + x + x + x;
        }
        System.out.println(x);

    It is totally obvious that x is never changed, so no matter how often you add 0 to itself it stays zero. So the compiler could in theory replace this with System.out.println(0). Or even better, this takes 23 seconds:

        public int slow() {
            String s = "x";
            for (int i = 0; i < 100000; ++i) {
                s += "x";
            }
            return 10;
        }

    First, the compiler could notice that I am actually creating a string s of 100,000 "x" characters, so it could automatically use a StringBuilder instead, or even better, directly replace it with the resulting string, as it is always the same. Second, it does not recognize that I never actually use the string at all, so the whole loop could be discarded! Why, after so much manpower has gone into fast compilers, are they still so relatively dumb?

    EDIT: Of course these are stupid examples that should never be used anywhere. But whenever I have to rewrite beautiful and very readable code into something unreadable so that the compiler is happy and produces fast code, I wonder why compilers or some other automated tool can't do this work for me.
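
    For reference, the hand rewrite the poster wishes the compiler would derive is the standard StringBuilder idiom (editor's sketch):

        public int fast() {
            StringBuilder sb = new StringBuilder(100000);
            for (int i = 0; i < 100000; ++i) {
                sb.append('x'); // O(n) total, versus O(n^2) for repeated s += "x"
            }
            String s = sb.toString(); // still unused - a smarter compiler could
                                      // delete the loop entirely, as the post notes
            return 10;
        }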

    Read the article

  • NHibernate unintentional lazy property loading

    - by chiccodoro
    I introduced a mapping for a business object which has (among others) a property called "Name":

        public class Foo : BusinessObjectBase {
            ...
            public virtual string Name { get; set; }
        }

    For some reason, when I fetch Foo objects, NHibernate seems to apply lazy property loading (for simple properties, not associations): the following code generates n+1 SQL statements, of which the first fetches only the ids and the remaining n fetch the Name for each record:

        ISession session = ...
        IQuery query = session.CreateQuery(queryString);
        ITransaction tx = session.BeginTransaction();
        List<Foo> result = new List<Foo>();
        foreach (Foo foo in query.Enumerable()) {
            result.Add(foo);
        }
        tx.Commit();
        session.Close();

    produces:

        NHibernate: select foo0_.FOO_ID as col_0_0_ from V1_FOO foo0_
        NHibernate: SELECT foo0_.FOO_ID as FOO1_2_0_, foo0_.NAME as NAME2_0_ FROM V1_FOO foo0_ WHERE foo0_.FOO_ID=:p0;:p0 = 81
        NHibernate: SELECT foo0_.FOO_ID as FOO1_2_0_, foo0_.NAME as NAME2_0_ FROM V1_FOO foo0_ WHERE foo0_.FOO_ID=:p0;:p0 = 36470
        NHibernate: SELECT foo0_.FOO_ID as FOO1_2_0_, foo0_.NAME as NAME2_0_ FROM V1_FOO foo0_ WHERE foo0_.FOO_ID=:p0;:p0 = 36473

    Similarly, the following code leads to a LazyLoadingException after the session is closed:

        ISession session = ...
        ITransaction tx = session.BeginTransaction();
        Foo result = session.Load<Foo>(id);
        tx.Commit();
        session.Close();
        Console.WriteLine(result.Name);

    Following this post, "lazy properties ... is rarely an important feature to enable ... (and) in Hibernate 3, is disabled by default." So what am I doing wrong? I managed to work around the LazyLoadingException with NHibernateUtil.Initialize(foo), but the even worse part is the n+1 SQL statements, which bring my application to its knees. This is how the mapping looks:

        <class name="Foo" table="V1_FOO">
            ...
            <property name="Name" column="NAME"/>
        </class>

    BTW: the abstract BusinessObjectBase base class encapsulates the ID property, which serves as the internal identifier.

    Read the article

  • Using VirtualMode on a DataGridView when the number of rows/columns isn't known

    - by Nathan Baulch
    I need to display an unknown-length sequence of dictionaries with unknown keys efficiently in a data grid. This sequence is the result of a potentially slow LINQ query that could contain any number of results. At first I thought that VirtualMode on DataGridView was what I was looking for, but it appears that the number of rows and columns must be known up front. I tried adding a single row and column and then adding more as needed from the CellValueNeeded event, but this doesn't work. Is this even possible with VirtualMode? Or do I need to estimate how many rows are visible on the screen and manually build up the rows/columns? And if so, how do I ensure that a vertical scrollbar is present and react appropriately when a user uses it?

    Read the article

  • Faster way to clone.

    - by AngryHacker
    I am trying to optimize a piece of code that clones an object:

        #region ICloneable
        public object Clone() {
            MemoryStream buffer = new MemoryStream();
            BinaryFormatter formatter = new BinaryFormatter();
            formatter.Serialize(buffer, this);    // takes 3.2 seconds
            buffer.Position = 0;
            return formatter.Deserialize(buffer); // takes 2.1 seconds
        }
        #endregion

    Pretty standard stuff. The problem is that the object is pretty beefy, and cloning it takes 5.4 seconds (according to ANTS Profiler - I am sure some of that is profiler overhead, but still). Is there a better and faster way to clone?

    Read the article

  • Is there a lightweight datagrid alternative in Flex?

    - by Wayne
    What is the most performant way of displaying a table of data in Flex? Are there alternatives to the native Flex DataGrid component - alternatives noted for their rendering speed? Are there other ways to display a table? I have a datagrid with roughly 70 rows and 7 columns of simple text data. It is currently created and loaded in memory, and it is being refreshed rapidly (about every 800 ms); there is a slight lag in other animations while it renders the table, so I am trying to cut down this render time.

    Read the article

  • What limits scaling in this simple OpenMP program?

    - by Douglas B. Staple
    I'm trying to understand limits to parallelization on a 48-core system (4x AMD Opteron 6348, 2.8 GHz, 12 cores per CPU). I wrote this tiny OpenMP code to test the speedup in what I thought would be the best possible situation (the task is embarrassingly parallel):

        // Compile with: gcc scaling.c -std=c99 -fopenmp -O3
        #include <stdio.h>
        #include <stdint.h>

        int main() {
            const uint64_t umin = 1;
            const uint64_t umax = 10000000000LL;
            double sum = 0.;
            #pragma omp parallel for reduction(+:sum)
            for (uint64_t u = umin; u < umax; u++)
                sum += 1. / u / u;
            printf("%e\n", sum);
        }

    I was surprised to find that the scaling is highly nonlinear. It takes about 2.9s for the code to run with 48 threads, 3.1s with 36 threads, 3.7s with 24 threads, 4.9s with 12 threads, and 57s with 1 thread. Unfortunately I have to say that there is one process running on the computer using 100% of one core, so that might be affecting it. It's not my process, so I can't end it to test the difference, but somehow I doubt that's making the difference between a 19-20x speedup and the ideal 48x speedup. To make sure it wasn't an OpenMP issue, I ran two copies of the program at the same time with 24 threads each (one with umin=1, umax=5000000000, and the other with umin=5000000000, umax=10000000000). In that case both copies finish after 2.9s, so it's exactly the same as running 48 threads with a single instance of the program. What's preventing linear scaling with this simple program?

    Read the article

  • C++ & proper TDD

    - by Kotti
    Hi! I recently tried developing a small-sized project in C#, and during the whole project our team used the test-driven development (TDD) technique (xUnit, Moq). I really think this was awesome, because (when paired with C#) this approach allowed us to relax when coding, relax when designing, and relax when refactoring. I suspect that all this TDD stuff actually simplifies the coding process and, well, it allowed me (eventually) to get the same result with fewer brain cells working. Right after that I tried using TDD paired with C++ (I used the Google Test and Google Mock libraries), and, I don't know why, but I actually think that TDD here was a step back in terms of rapid application development. I had moments when I had to spend huge amounts of time thinking about my tests, building proper mocks, rebuilding them, and swearing at my monitor. And, well, I obviously can't ask something like "what did I do wrong?" or "what was wrong in my approach?", because I don't know what to describe. But if there are any people who are used to TDD in C++ (and probably C# too), could you please advise me how to do this properly? Framework recommendations, architecture approaches, plain coding advice - if you are experienced in TDD & C++, please respond.

    Read the article

  • What types of websites does memcached speed up?

    - by Saif Bechan
    I have read this article about a 400% boost for your website, achieved with a combination of nginx and memcached. The how-to part of that website is quite good, but it misses the part that says what types of websites this applies to. I know nginx is an HTTP server; I need no explanation for that. I thought memcached had something to do with caching database results. However, I don't understand what this has to do with the HTTP request - can someone please explain that to me? Another question I have is: for what types of websites is this used? I have a website where the important part consists of data that changes often, often being minutes. Will this method still apply to me, or should I just stick with the basic boring setup of Apache and nothing else?
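
    Editor's note on the memcached/HTTP connection: memcached is just a networked key-value store. Applications typically use it cache-aside (check the cache, fall back to the database, store the result with a TTL), and nginx's memcached module can additionally serve a stored value directly as an HTTP response body, skipping the application entirely on a hit. A hedged sketch of the cache-aside half using the spymemcached Java client (the key name and rendering function are made up):

        import java.net.InetSocketAddress;
        import net.spy.memcached.MemcachedClient;

        public class PageCache {
            public static void main(String[] args) throws Exception {
                MemcachedClient cache =
                        new MemcachedClient(new InetSocketAddress("localhost", 11211));

                String key = "page:/front";          // hypothetical cache key
                String html = (String) cache.get(key);
                if (html == null) {
                    html = renderFrontPage();        // slow path: DB reads + templating
                    cache.set(key, 60, html);       // expire after 60s, so data that
                }                                    // changes every few minutes stays fresh
                System.out.println(html);
                cache.shutdown();
            }

            static String renderFrontPage() {
                return "<html>...</html>";           // stand-in for the real rendering
            }
        }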

    Read the article

  • MSSQL Server high CPU and I/O activity database tuning

    - by zapping
    Our application has been running very slowly recently. Debugging and tracing showed that the process has high CPU usage and that SQL Server shows high I/O activity. Can you please give some guidance on how this can be optimised? The application is now about a year old and the database files are not very big. The database is set to auto-shrink. It's running on Windows 2003 and SQL Server 2005, and the application is a web application coded in C# (VS2005).

    Read the article
