Search Results

Search found 13249 results on 530 pages for 'virtualized performance'.


  • How to shift pixels of a pixmap efficiently in Qt4

    - by stanleyxu2005
    Hello, I have implemented a marquee text widget using Qt4. I paint the text content onto a pixmap first, and then paint a portion of this pixmap onto a paint device by calling painter.drawTiledPixmap(offsetX, offsetY, myPixmap). My understanding is that Qt will fill the whole marquee text rectangle with the content from myPixmap. Is there an even faster way: shift all existing content to the left by 1px, and then fill only the newly exposed 1px-wide, N-px-high area with the content from myPixmap?
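
    One hedged possibility (Qt 4.6 or later; all names are illustrative): QPixmap::scroll() moves the existing pixels in place, so only the newly exposed 1px column has to be repainted from the source pixmap:

        #include <QPixmap>
        #include <QPainter>

        // Sketch: 'cache' is the visible marquee strip, 'source' the full
        // pre-rendered text, 'offset' the scroll position (assumed names).
        void shiftMarquee(QPixmap &cache, const QPixmap &source, int &offset)
        {
            cache.scroll(-1, 0, cache.rect());        // shift content left 1px
            QPainter p(&cache);
            const int x = cache.width() - 1;          // newly exposed column
            p.drawPixmap(x, 0, source,
                         offset % source.width(), 0,  // wrap around the source
                         1, cache.height());
            ++offset;
        }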


  • Good strategy for copying a "sliding window" of data from a table?

    - by chiborg
    I have a MySQL table from a third-party application that has millions of rows and only one index - the timestamp of each entry. Now I want to do some heavy self-joins and queries on the data using fields other than the timestamp. Doing the queries on the original table would bring the database to a crawl, and adding indexes to the table is not an option. Additionally, I only need entries that are newer than one week. My current strategy for doing the queries efficiently is to use a separate table (aux_table) that has the necessary indexes. My questions are: Is there another way to do the queries? And if not, how do I update the data in the indexed table efficiently? So far I have found two approaches for updating aux_table: 1) Truncate aux_table and insert the desired data from the original table. Not very efficient, because all the indexes must be re-created. 2) Check for the biggest timestamp in aux_table and insert all entries with a greater or equal timestamp from the original table; occasionally drop older entries. Copying only entries with a strictly greater timestamp drops entries, because rows with the same timestamp can be inserted into the original table after the last update.
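
    A hedged sketch of the incremental top-up in SQL (table and column names assumed; INSERT IGNORE needs a PRIMARY KEY or UNIQUE key on aux_table so re-copied boundary rows are skipped rather than duplicated):

        -- Re-copy from the boundary timestamp inclusively, so late rows that
        -- share the boundary timestamp are not dropped:
        INSERT IGNORE INTO aux_table
        SELECT * FROM original_table
        WHERE ts >= (SELECT COALESCE(MAX(ts), '1970-01-01') FROM aux_table);

        -- Occasionally trim entries older than one week:
        DELETE FROM aux_table WHERE ts < NOW() - INTERVAL 7 DAY;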


  • Any faster alternative?

    - by kaushik
    I have to read a file starting from a particular line number, and I know the line number, say "n". I have been thinking of two choices:

    1)

        for i in range(n):
            fname.readline()
        k = fname.readline()
        print k

    2)

        i = 0
        for line in fname:
            dictionary[i] = line
            i = i + 1

    But I want to know a faster alternative, as I might have to perform this on different files 20000 times. Are there any better alternatives? Thanking you.
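
    A hedged sketch of two stdlib options that usually beat both choices (Python 2.6+ syntax, as in the question): itertools.islice skips lines in C code without storing them, and linecache caches whole files, which pays off when the same files are hit repeatedly:

        import itertools
        import linecache

        # Option A: skip n-1 lines in C code, then take line n (1-based).
        def line_n(path, n):
            with open(path) as f:
                return next(itertools.islice(f, n - 1, n), None)

        # Option B: linecache keeps files cached between calls, which helps
        # when the same files are read many of the 20000 times.
        def line_n_cached(path, n):
            return linecache.getline(path, n)

        print line_n('data.txt', 42)
        print line_n_cached('data.txt', 42)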


  • Know of any Java garbage collection log analysis tools?

    - by braveterry
    I'm looking for a tool or a script that will take the console log from my web app, parse out the garbage collection information and display it in a meaningful way. I'm starting up on a Sun Java 1.4.2 JVM with the following flags: -verbose:gc -XX:+PrintGCTimeStamps -XX:+PrintGCDetails The log output looks like this: 54.736: [Full GC 54.737: [Tenured: 172798K->18092K(174784K), 2.3792658 secs] 257598K->18092K(259584K), [Perm : 20476K->20476K(20480K)], 2.4715398 secs] Making sense of a few hundred of these kinds of log entries would be much easier if I had a tool that would visually graph garbage collection trends.
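
    Until a proper tool turns up, a hedged sketch of a quick parser for exactly the sample format above (Python 2; real 1.4.2 logs vary by collector, so treat the regex as a starting point, not a spec):

        import re

        # Extract timestamp, heap before/after (KB) and total pause from
        # Full GC lines shaped like the sample entry.
        FULL_GC = re.compile(
            r'(\d+\.\d+): \[Full GC .*?\] '
            r'(\d+)K->(\d+)K\((\d+)K\), \[Perm .*?\], (\d+\.\d+) secs\]')

        def parse(path):
            for line in open(path):
                m = FULL_GC.search(line)
                if m:
                    ts, before, after, total, pause = m.groups()
                    yield float(ts), int(before), int(after), float(pause)

        for ts, before, after, pause in parse('gc.log'):
            print '%9.3f  %8dK -> %8dK  pause %.2fs' % (ts, before, after, pause)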


  • when is java faster than c++ (or when is JIT faster than precompiled)?

    - by kostja
    I have heard that under certain circumstances, Java programs, or rather parts of Java programs, can execute faster than the "same" code in C++ (or other precompiled code) due to JIT optimizations. This is due to the compiler being able to determine the scope of some variables, avoid some conditionals, and pull similar tricks at runtime. Could you give an (or better, some) example where this applies? And maybe outline the exact conditions under which the compiler is able to optimize the bytecode beyond what is possible with precompiled code? NOTE: This question is not about comparing Java to C++. It's about the possibilities of JIT compiling. Please no flaming. I am also not aware of any duplicates. Please point them out if you are.
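
    One commonly cited case, sketched as a hedged illustration rather than a benchmark: a virtual call site that turns out to be monomorphic at runtime. The JIT can observe that only one implementation was ever loaded, inline the call, and deoptimize later if another class appears; an ahead-of-time compiler facing an opaque virtual call typically cannot do this without whole-program information:

        interface Shape { double area(); }

        final class Circle implements Shape {
            private final double r;
            Circle(double r) { this.r = r; }
            public double area() { return Math.PI * r * r; }
        }

        class Demo {
            // If Circle is the only Shape ever seen here, the JIT can inline
            // area() into the loop; if a new Shape subclass later appears, it
            // deoptimizes and falls back to virtual dispatch.
            static double total(Shape[] shapes) {
                double sum = 0;
                for (Shape s : shapes) sum += s.area();
                return sum;
            }

            public static void main(String[] args) {
                Shape[] shapes = new Shape[1000];
                for (int i = 0; i < shapes.length; i++) shapes[i] = new Circle(i);
                System.out.println(total(shapes));
            }
        }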


  • ASP .NET page runs slow in production

    - by Brandi
    I have created an ASP .NET page that works flawlessly and quickly from Visual Studio. It does a very large database read from a database on our network to load a gridview inside of an update panel. It displays progress in an Ajax modalpopupextender. Of course I don't expect it to be instant what with the large db reads, but it takes on the order of seconds, not on the order of minutes. This is all working great until I put it up on the server - it is very, VERY slow when I access it via the internet - takes several minutes to load the database information into the gridview. I'm baffled why it would not perform the exact same as it had from Visual Studio. (It is in release mode and I have taken off the debug flag) I have since been trying things like eliminating unneeded update panels and throwing out the ajax tool. Nothing has made it any faster on production. It is not the database as far as I know, since it has been consistently fast from my computer (from visual studio) and consistently slow from the server. I am wondering, where do I look next? Has anyone else had this problem before? Could this be caused by update panels or Ajax modalpopupextenders in different parts of the application? Why would the live behaviour differ so much from the localhost behaviour? Both the server with the ASP .NET page and the server with the database are servers on our network. I'm using Visual Studio 2008. Thank you in advance for any insight or advice.


  • mysql subquery strangely slow

    - by aviv
    I have a query that selects from another sub-query select. While the two queries look almost the same, the second query (in this sample) runs much slower:

        SELECT user.id
              ,user.first_name
              -- user.*
        FROM user
        WHERE user.id IN (SELECT ref_id
                          FROM education
                          WHERE ref_type='user'
                            AND education.institute_id='58'
                            AND education.institute_type='1');

    This query takes 1.2s. Explain on this query gives:

        id  select_type         table      type            possible_keys                                           key         key_len  ref   rows    Extra
        1   PRIMARY             user       index                                                                   first_name  152            141192  Using where; Using index
        2   DEPENDENT SUBQUERY  education  index_subquery  ref_type,ref_id,institute_id,institute_type,ref_type_2  ref_id      4        func  1       Using where

    The second query:

        SELECT -- user.id
               -- user.first_name
               user.*
        FROM user
        WHERE user.id IN (SELECT ref_id
                          FROM education
                          WHERE ref_type='user'
                            AND education.institute_id='58'
                            AND education.institute_type='1');

    takes 45sec to run, with explain:

        id  select_type         table      type            possible_keys                                           key     key_len  ref   rows    Extra
        1   PRIMARY             user       ALL                                                                                      141192  Using where
        2   DEPENDENT SUBQUERY  education  index_subquery  ref_type,ref_id,institute_id,institute_type,ref_type_2  ref_id  4        func  1       Using where

    Why is it slower if I query only by index fields? Why do both queries scan the full length of the user table? Any ideas how to improve? Thanks.
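
    A hedged rewrite that often sidesteps the dependent subquery (older MySQL versions re-run an IN subquery per outer row; a derived-table JOIN lets the education index drive the plan - worth verifying with EXPLAIN on your version):

        SELECT u.*
        FROM user u
        JOIN (SELECT DISTINCT ref_id
              FROM education
              WHERE ref_type = 'user'
                AND institute_id = '58'
                AND institute_type = '1') e
          ON u.id = e.ref_id;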


  • Code runs 6 times slower with 2 threads than with 1

    - by Edward Bird
    So I have written some code to experiment with threads and do some testing. The code should create some numbers and then find the mean of those numbers. I think it is just easier to show you what I have so far. I was expecting the code to run about 2 times as fast with two threads. Measuring it with a stopwatch, I think it runs about 6 times slower!

        #include <cstdint>
        #include <cstdlib>
        #include <iostream>
        #include <thread>
        #include <vector>

        void findmean(std::vector<double>*, std::size_t, std::size_t, double*);

        int main(int argn, char** argv)
        {
            // Program entry point
            std::cout << "Generating data..." << std::endl;

            // Create a vector containing many variables
            std::vector<double> data;
            for(uint32_t i = 1; i <= 1024 * 1024 * 128; i++)
                data.push_back(i);

            // Calculate mean using 1 core
            double mean = 0;
            std::cout << "Calculating mean, 1 Thread..." << std::endl;
            findmean(&data, 0, data.size(), &mean);
            mean /= (double)data.size();

            // Print result
            std::cout << "  Mean=" << mean << std::endl;

            // Repeat, using two threads
            std::vector<std::thread> thread;
            std::vector<double> result;
            result.push_back(0.0);
            result.push_back(0.0);
            std::cout << "Calculating mean, 2 Threads..." << std::endl;

            // Split the data into two blocks
            uint32_t halfsize = data.size() / 2;
            uint32_t A = 0;
            uint32_t B, C, D;
            if(data.size() % 2 == 0)
            {
                B = C = D = halfsize;
            }
            else if(data.size() % 2 == 1)
            {
                B = C = halfsize;
                D = halfsize + 1;
            }

            // Run with two threads
            thread.push_back(std::thread(findmean, &data, A, B, &(result[0])));
            thread.push_back(std::thread(findmean, &data, C, D, &(result[1])));

            // Join threads
            thread[0].join();
            thread[1].join();

            // Calculate result
            mean = result[0] + result[1];
            mean /= (double)data.size();

            // Print result
            std::cout << "  Mean=" << mean << std::endl;

            return EXIT_SUCCESS;
        }

        void findmean(std::vector<double>* datavec, std::size_t start,
                      std::size_t length, double* result)
        {
            for(uint32_t i = 0; i < length; i++)
            {
                *result += (*datavec).at(start + i);
            }
        }

    I don't think this code is exactly wonderful; if you could suggest ways of improving it, then I would be grateful for that also.
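
    For what it's worth, a hedged guess at the usual culprit in code shaped like this: both threads write through *result on every iteration, and result[0] and result[1] sit side by side in one cache line, so the two cores keep invalidating each other's caches (false sharing). Accumulating into a local variable and writing once removes the ping-pong (sketch):

        void findmean_local(std::vector<double>* datavec, std::size_t start,
                            std::size_t length, double* result)
        {
            double sum = 0.0;                     // stays in a register
            for (std::size_t i = 0; i < length; i++)
                sum += (*datavec)[start + i];
            *result = sum;                        // single write at the end
        }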


  • Queue-like data structure with fast search and insertion

    - by Max
    I need a data structure with the following properties: It contains integer numbers, no duplicates. After it reaches the maximal size, the first element is removed. So if the capacity is 3, this is how it would look when putting sequential numbers into it: {}, {1}, {1, 2}, {1, 2, 3}, {2, 3, 4}, {3, 4, 5} etc. Only two operations are needed: inserting a number into this container (INSERT) and checking if the number is already in the container (EXISTS). The number of EXISTS operations is expected to be approximately 2 * the number of INSERT operations. I need these operations to be as fast as possible. What would be the fastest data structure or combination of data structures for this scenario?
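
    A hedged sketch of one classic combination (C++11; names illustrative): a FIFO deque remembers the eviction order while a hash set answers EXISTS in expected O(1):

        #include <cstddef>
        #include <deque>
        #include <unordered_set>

        class SlidingSet {
        public:
            explicit SlidingSet(std::size_t capacity) : cap_(capacity) {}

            bool exists(int value) const { return set_.count(value) != 0; }

            void insert(int value) {
                if (exists(value)) return;        // no duplicates
                if (fifo_.size() == cap_) {       // evict the oldest entry
                    set_.erase(fifo_.front());
                    fifo_.pop_front();
                }
                fifo_.push_back(value);
                set_.insert(value);
            }

        private:
            std::size_t cap_;
            std::deque<int> fifo_;
            std::unordered_set<int> set_;
        };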


  • Scalability of Ruby on Rails versus PHP

    - by Daniel
    Can anyone comment on which is more scalable between RoR and PHP? I have heard that RoR is less scalable than PHP since RoR has a little more overhead with its MVC framework while PHP is more low level and lighter. This is a bit vague - can anyone explain better?


  • Cocoa - does CGDataProviderCopyData() actually copy the bytes? Or just the pointer?

    - by jtrim
    I'm running that method in quick succession as fast as I can, and the faster the better, so obviously if CGDataProviderCopyData() is actually copying the data byte-for-byte, then I think there must be a faster way to directly access that data...it's just bytes in memory. Anyone know for sure if CGDataProviderCopyData() actually copies the data? Or does it just create a new pointer to the existing data?


  • NHibernate unintentional lazy property loading

    - by chiccodoro
    I introduced a mapping for a business object which has (among others) a property called "Name":

        public class Foo : BusinessObjectBase
        {
            ...
            public virtual string Name { get; set; }
        }

    For some reason, when I fetch "Foo" objects, NHibernate seems to apply lazy property loading (for simple properties, not associations): The following code piece generates n+1 SQL statements, whereof the first only fetches the ids, and the remaining n fetch the Name for each record:

        ISession session = ...
        IQuery query = session.CreateQuery(queryString);
        ITransaction tx = session.BeginTransaction();
        List<Foo> result = new List<Foo>();
        foreach (Foo foo in query.Enumerable())
        {
            result.Add(foo);
        }
        tx.Commit();
        session.Close();

    produces:

        NHibernate: select foo0_.FOO_ID as col_0_0_ from V1_FOO foo0_
        NHibernate: SELECT foo0_.FOO_ID as FOO1_2_0_, foo0_.NAME as NAME2_0_ FROM V1_FOO foo0_ WHERE foo0_.FOO_ID=:p0;:p0 = 81
        NHibernate: SELECT foo0_.FOO_ID as FOO1_2_0_, foo0_.NAME as NAME2_0_ FROM V1_FOO foo0_ WHERE foo0_.FOO_ID=:p0;:p0 = 36470
        NHibernate: SELECT foo0_.FOO_ID as FOO1_2_0_, foo0_.NAME as NAME2_0_ FROM V1_FOO foo0_ WHERE foo0_.FOO_ID=:p0;:p0 = 36473

    Similarly, the following code leads to a LazyLoadingException after the session is closed:

        ISession session = ...
        ITransaction tx = session.BeginTransaction();
        Foo result = session.Load<Foo>(id);
        tx.Commit();
        session.Close();
        Console.WriteLine(result.Name);

    Following this post, "lazy properties ... is rarely an important feature to enable ... (and) in Hibernate 3, is disabled by default." So what am I doing wrong? I managed to work around the LazyLoadingException by doing a NHibernateUtil.Initialize(foo), but the even worse part is the n+1 SQL statements which bring my application to its knees. This is how the mapping looks:

        <class name="Foo" table="V1_FOO">
            ...
            <property name="Name" column="NAME"/>
        </class>

    BTW: The abstract "BusinessObjectBase" base class encapsulates the ID property which serves as the internal identifier.
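
    One hedged observation, worth verifying against your NHibernate version: IQuery.Enumerable() is documented to select identifiers first and hydrate each entity on access, which matches the n+1 pattern above, whereas IQuery.List() fetches the full rows in a single statement:

        // Sketch: List() materializes the whole result set in one SELECT,
        // unlike Enumerable(), which fetches ids first and each row on access.
        IList<Foo> result = query.List<Foo>();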


  • MySQL Locking Up

    - by Ian
    I've got a innodb table that gets a lot of reads and almost no writes (like, 1 write for every 400,000 reads approx). I'm running into a pretty big problem though when I do INSERT into the table. MySQL completely locks up. It uses 100% cpu, and every single other table (in other databases even) have their statuses set to "Locked" until the INSERT is done. This is a big problem because MySQL stays locked up for up to 4 minutes. I'm using version 5.1.47 (rpm from mysql.com). Any ideas?


  • Coding Practices which enable the compiler/optimizer to make a faster program.

    - by EvilTeach
    Many years ago, C compilers were not particularly smart. As a workaround, K&R invented the register keyword to hint to the compiler that maybe it would be a good idea to keep this variable in an internal register. They also made the ternary operator to help generate better code. As time passed, the compilers matured. They became very smart in that their flow analysis allowed them to make better decisions about what values to hold in registers than you could possibly do. The register keyword became unimportant. FORTRAN can be faster than C for some sorts of operations, due to alias issues. In theory, with careful coding, one can get around this restriction to enable the optimizer to generate faster code. What coding practices are available that may enable the compiler/optimizer to generate faster code? Identifying the platform and compiler you use would be appreciated. Why does the technique seem to work? Sample code is encouraged. [Edit] This question is not about the overall process to profile and optimize. Assume that the program has been written correctly, compiled with full optimization, tested, and put into production. There may be constructs in your code that prohibit the optimizer from doing the best job that it can. What can you do to refactor that will remove these prohibitions, and allow the optimizer to generate even faster code?
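
    As one hedged illustration of the aliasing point (C99; the gains depend on compiler and flags):

        #include <stddef.h>

        /* Without restrict, the compiler must assume a, b and out may
           overlap, so it reloads memory on every iteration. restrict
           promises no aliasing, which typically lets it keep values in
           registers and vectorize the loop. */
        void add(size_t n, const double *restrict a,
                 const double *restrict b, double *restrict out)
        {
            for (size_t i = 0; i < n; i++)
                out[i] = a[i] + b[i];
        }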


  • No improvement in speed when using Ehcache with Hibernate

    - by paddydub
    I'm getting no improvement in speed when using Ehcache with Hibernate. Here are the results I get when I run the test below. The test reads 80 Stop objects and then the same 80 Stop objects again using the cache. On the second read it is hitting the cache, but there is no improvement in speed. Any ideas on what I'm doing wrong?

    Speed test:

        First Read:  Reading stops 1-80 : 288ms
        Second Read: Reading stops 1-80 : 275ms

    Cache info:

        elementsInMemory: 79
        elementsInMemoryStore: 79
        elementsInDiskStore: 0

    JunitCacheTest:

        public class JunitCacheTest extends TestCase
        {
            static Cache stopCache;

            public void testCache()
            {
                ApplicationContext context = new ClassPathXmlApplicationContext("beans-hibernate.xml");
                StopDao stopDao = (StopDao) context.getBean("stopDao");
                CacheManager manager = new CacheManager();
                stopCache = (Cache) manager.getCache("ie.dataStructure.Stop.Stop");

                // First Read
                for (int i = 1; i < 80; i++)
                {
                    Stop toStop = stopDao.findById(i);
                }

                // Second Read
                for (int i = 1; i < 80; i++)
                {
                    Stop toStop = stopDao.findById(i);
                }

                System.out.println("elementsInMemory " + stopCache.getSize());
                System.out.println("elementsInMemoryStore " + stopCache.getMemoryStoreSize());
                System.out.println("elementsInDiskStore " + stopCache.getDiskStoreSize());
            }

            public static Cache getStopCache()
            {
                return stopCache;
            }
        }

    HibernateStopDao:

        @Repository("stopDao")
        public class HibernateStopDao implements StopDao
        {
            private SessionFactory sessionFactory;

            @Transactional(readOnly = true)
            public Stop findById(int stopId)
            {
                Cache stopCache = JunitCacheTest.getStopCache();
                Element cacheResult = stopCache.get(stopId);
                if (cacheResult != null)
                {
                    return (Stop) cacheResult.getValue();
                }
                else
                {
                    Stop result = (Stop) sessionFactory.getCurrentSession().get(Stop.class, stopId);
                    Element element = new Element(result.getStopID(), result);
                    stopCache.put(element);
                    return result;
                }
            }
        }

    ehcache.xml:

        <cache name="ie.dataStructure.Stop.Stop"
               maxElementsInMemory="1000"
               eternal="false"
               timeToIdleSeconds="5200"
               timeToLiveSeconds="5200"
               overflowToDisk="true">
        </cache>

    stop.hbm.xml:

        <class name="ie.dataStructure.Stop.Stop" table="stops" catalog="hibernate3" mutable="false">
            <cache usage="read-only"/>
            <comment></comment>
            <id name="stopID" type="int">
                <column name="STOPID" />
                <generator class="assigned" />
            </id>
            <property name="coordinateID" type="int">
                <column name="COORDINATEID" not-null="true">
                    <comment></comment>
                </column>
            </property>
            <property name="routeID" type="int">
                <column name="ROUTEID" not-null="true">
                    <comment></comment>
                </column>
            </property>
        </class>

    Stop:

        public class Stop implements Comparable<Stop>, Serializable
        {
            private static final long serialVersionUID = 7823769092342311103L;

            private Integer stopID;
            private int routeID;
            private int coordinateID;
        }


  • SQL query: use WHERE IN or foreach?

    - by phenevo
    Hi, I'm using a query where one piece is:

        ...WHERE code IN ('va1','var2',...)

    I have about 50k of these codes. It was working when I had 30k codes, but now I get:

        The query processor ran out of internal resources and could not produce a query plan. This is a rare event and only expected for extremely complex queries or queries that reference a very large number of tables or partitions.

    I think the problem is related to IN... So now I'm planning to use foreach(string code in codes) and run ...WHERE code = code for each one. Is it a good idea?
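
    A hedged alternative sketch (T-SQL; table and column names assumed): load the 50k codes into an indexed temp table once and JOIN against it, instead of a 50k-item IN list or 50k single-row queries:

        CREATE TABLE #codes (code varchar(50) PRIMARY KEY);
        -- ... bulk-load the codes here, e.g. with SqlBulkCopy from the app ...

        SELECT t.*
        FROM myTable t
        JOIN #codes c ON c.code = t.code;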


  • advantages of Zend_Db_Table vs raw (My)SQL?

    - by sunwukung
    Currently working on a new Zend application and developing the Model. Having worked with Zend_Db_Table before, I opted to replace references in the Model to the Table API with a custom SQL script to take care of data access duties. Now I'm looking at developing a new application/domain model, and I wanted to get some feedback from people re: their experiences with Zend_Db API vs raw SQL, and use cases where it would be preferable to use the API. From a project perspective, the DB platform is unlikely to change from MySQL - so it doesn't need to be particularly abstract - and I assume writing a custom SQL API will be more performant than the assorted classes the Zend DB API requires.


  • Is there a lightweight datagrid alternative in Flex ?

    - by Wayne
    What is the most performant way of displaying a table of data in Flex? Are there alternatives to the native Flex Datagrid Component? Alternatives that are noted for their rendering speed? Are there other ways to display a table? I have a datagrid with roughly 70 lines and 7 columns of simple text data. This is currently created and loaded in memory. This is being refreshed rapidly (about 800 msec) and there is a slight lag in other animations when it is rendering the table... So I am trying to cut down this render time.


  • SQL Server query problem

    - by user335160
    I want to achieve the results shown in the attached image. The table structure and data are as follows:

    Table relationships:

        Overall IB Limit -> one to many -> Facility Limit
        Facility Limit   -> one to many -> Facility Sub Limit

    Overall IB Limit:

        Id  SCAF Reference  Approval Date
        1   NEW-001         January 1, 2011
        2   NEW-002         January 2, 2011
        3   NEW-003         January 3, 2011

    Facility Limit:

        Id  OverallIBLimitId  Product Type
        1   1                 RPA
        2   1                 CG
        3   2                 RPA
        4   3                 CG

    Facility Sub Limit:

        Id  FacilityLimitId  Sub-Limit Type     Amount         Tenor     Status    Status Date
        1   1                RPA at max         2,000,0000.00  2 months  Approved  January 5, 2011
        2   1                Oil                3,000,0000.00  3 yrs     Approved  January 5, 2011
        3   2                CG at minor        4,000,0000.00  1 yr      Approved  January 5, 2011
        4   2                CG at max          5,000,0000.00  6 months  Approved  January 5, 2011
        5   2                Flood Component 1  5,000,0000.00  6 months  Approved  January 5, 2011
        6   2                Flood Component 2  6,000,0000.00  3 yrs     Approved  January 5, 2011
        7   3                RPA at minor       1,000,0000.00  6 months  Approved  January 5, 2011
        8   4                One-Off            1,000,0000.00  6 months  Approved  January 5, 2011


  • Faster way to clone.

    - by AngryHacker
    I am trying to optimize a piece of code that clones an object:

        #region ICloneable
        public object Clone()
        {
            MemoryStream buffer = new MemoryStream();
            BinaryFormatter formatter = new BinaryFormatter();
            formatter.Serialize(buffer, this);    // takes 3.2 seconds
            buffer.Position = 0;
            return formatter.Deserialize(buffer); // takes 2.1 seconds
        }
        #endregion

    Pretty standard stuff. The problem is that the object is pretty beefy and it takes 5.4 seconds (according to ANTS Profiler - I am sure there is profiler overhead, but still). Is there a better and faster way to clone?
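
    One hedged direction: the serialization round-trip pays for reflection and stream I/O, so a hand-written deep copy is usually far faster. A sketch under the assumption that the object (called Foo here, with an illustrative Items list) owns its references:

        // Sketch only - Foo and Items stand in for the real object graph;
        // every owned reference type must be copied by hand.
        public Foo DeepCopy()
        {
            Foo copy = (Foo)MemberwiseClone();    // shallow-copies value fields
            copy.Items = new List<Item>(Items.Count);
            foreach (Item item in Items)
                copy.Items.Add(item.DeepCopy());  // recurse into owned objects
            return copy;
        }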


  • Node & Redis: Crucial Design Issues in Production Mode

    - by Ali
    This question is a hybrid one, being both technical and system-design related. I'm developing the backend of an application that will handle approx. 4K requests per second. We are using Node.js for its speed, and in terms of our database structure we are using MongoDB, with Redis as a layer between Node and MongoDB handling volatile operations. I'm quite stressed because we are expecting concurrent requests that we need to handle carefully, and we are quite close to launch. However, I do not believe I've applied the correct approach with Redis. I have a class Student, and students constantly change stages (such as 'active', 'doing homework', 'in lesson' etc.). Thus I created a Redis DB for each state (1 for 'active', 2 for 'doing homework'). Below is the structure of the 'active' students table:

        xa5p - JSON stringified object #1
        pQrW - JSON stringified object #2
        active_student_table - {{studentId:'xa5p'}, {studentId:'pQrW'}}

    Since there is no 'select all keys' method in Redis, I've been suggested to use a set, such that when I run the 'smembers' command I receive the keys, and later do a 'get' for each id in order to find a specific user (let's say, age older than 15). I've also been told never to use KEYS in production mode. My question is: no matter how 'conceptual' it is, what specific things should I avoid doing in Node & Redis at the production stage? Are there any issues related to my design? Students must be objects, and I sure can keep them in a list, but I haven't done that yet. Is it that crucial at the production stage?
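
    A hedged sketch of the usual alternative (plain Redis commands; key names assumed): keep each state as a set in a single DB and move members between sets atomically, instead of one DB per state:

        SADD students:active xa5p                      # enter the 'active' state
        SMOVE students:active students:homework xa5p   # atomic state change
        SISMEMBER students:homework xa5p               # O(1) membership test
        SMEMBERS students:homework                     # ids in a state, no KEYS needed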


  • What's a better choice for SQL-backed number crunching - Ruby 1.9, Python 2, Python 3, or PHP 5.3?

    - by Ivan
    Criteria for 'better': fast in math and in simple (few fields, many records) db transactions; convenient to develop/read/extend; flexible; connectible. The task is to use a common web-development scripting language to process and calculate long time series and multidimensional surfaces (mostly selecting/inserting sets of floats and doing maths with them). The choices are Ruby 1.9, Python 2, Python 3, PHP 5.3, Perl 5.12, and JavaScript (node.js). All the data is to be stored in a relational database (due to its heavily multidimensional nature), and all communication with the outer world is to be done by means of web services.


  • Using VirtualMode on a DataGridView when the number of rows/columns isn't known

    - by Nathan Baulch
    I need to display an unknown-length sequence of dictionaries with unknown keys efficiently in a data grid. This sequence is the result of a potentially slow LINQ query that could contain any number of results. At first I thought that VirtualMode on DataGridView was what I was looking for, but it appears that the number of rows and columns must be known upfront. I tried adding a single row and column and then adding more as needed from the CellValueNeeded event, but this doesn't work. Is this even possible with VirtualMode? Or do I need to estimate how many rows are visible on the screen and manually build up the rows/columns? And if so, how do I ensure that a vertical scrollbar is present and react appropriately when a user uses it?
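
    A hedged sketch of how this is often handled (WinForms; cache, Flatten and EnsureColumns are hypothetical helpers): VirtualMode does want RowCount and ColumnCount set, but nothing stops you from raising them as the slow LINQ query streams results in, and the grid keeps the vertical scrollbar in sync with RowCount:

        // Assumed setup: grid is a DataGridView with VirtualMode enabled.
        grid.VirtualMode = true;
        grid.CellValueNeeded += (s, e) =>
        {
            // Serve values from a local cache filled as results arrive.
            e.Value = cache[e.RowIndex][e.ColumnIndex];
        };

        // Called for each dictionary the LINQ query yields:
        void OnRowArrived(Dictionary<string, object> row)
        {
            EnsureColumns(row.Keys);      // add any unseen keys as columns
            cache.Add(Flatten(row));      // hypothetical helpers
            grid.RowCount = cache.Count;  // growing RowCount updates the scrollbar
        }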

