Search Results

Search found 19055 results on 763 pages for 'high performance'.

Page 24/763 | < Previous Page | 20 21 22 23 24 25 26 27 28 29 30 31  | Next Page >

  • How does Android emulator performance compare to real device performance?

    - by uj2
    I'm looking into writing an Android game, though I don't currently own an Android device. For those of you who do own a device, how does performance on the emulator relate to real device performance? I'm especially interested in graphics-related tasks. This obviously depends on both the machine running the emulator and the specific device in question, but I'm talking rough numbers here. This question is a duplicate, but since that post is heavily outdated, I figured it's no longer relevant by now.

    Read the article

  • Combining two UPDATE Commands - Performance ?

    - by Johannes
    If I want to update two rows in a MySQL table using the following two commands: UPDATE table SET Col = Value1 WHERE ID = ID1 UPDATE table SET Col = Value2 WHERE ID = ID2 I usually combine them into one command, so that I do not have to contact the MySQL server twice from my C client: UPDATE table SET Col = IF( ID = ID1 , Value1 , Value2) WHERE ID=ID1 OR ID=ID2 Is this really a performance gain? Background information: I am using a custom-made, high-performance, heavily loaded web server written entirely in C.
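
    For reference, a minimal JDBC sketch (the poster's client is C, so this is only an illustration) of the usual alternative to the IF() trick: sending both updates as one parameterized batch. Table, column and connection details are invented, and whether the batch collapses into a single network round trip depends on the driver and its settings.

      // Sketch: two UPDATEs sent as one batch instead of one IF()-based statement.
      // Table name, column names and connection details are made up for illustration.
      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.PreparedStatement;

      public class BatchUpdateSketch {
          public static void main(String[] args) throws Exception {
              try (Connection conn = DriverManager.getConnection(
                      "jdbc:mysql://localhost/testdb", "user", "password")) {
                  // One parameterized statement, two parameter sets, one batch execution.
                  try (PreparedStatement ps = conn.prepareStatement(
                          "UPDATE table1 SET Col = ? WHERE ID = ?")) {
                      ps.setInt(1, 10); ps.setInt(2, 1);   // Value1 / ID1
                      ps.addBatch();
                      ps.setInt(1, 20); ps.setInt(2, 2);   // Value2 / ID2
                      ps.addBatch();
                      ps.executeBatch();                   // both updates handed to the driver together
                  }
              }
          }
      }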

    Read the article

  • JVM performance test suite

    - by pierr
    Hi, I have just ported phoneME to our MIPS platform. It does not feel that fast; is there any performance test suite I can run against it to get some quantitative measurement of the performance? I might need to pick some weak points for optimization. In addition, what are the common criteria used to evaluate a JVM?

    Read the article

  • Project Performance Evaluation and Finding Weak Areas

    - by pramodc84
    I'm working on a J2EE web project with lots of Java, SQL scripts, JS and AJAX stuff. The project has been running fine for 5 years. I have been assigned the task of evaluating its performance, as there might be some memory usage issues, delays in the DB fetching logic and other similar weak performance areas. Where should I begin? Any best practices to make the project better?

    Read the article

  • How to cache queries in EJB and return result efficient (performance POV)

    - by Maxym
    I use the JBoss EJB 3.0 implementation (JBoss 4.2.3 server). At the beginning I created the native query every time, using a construction like Query query = entityManager.createNativeQuery("select * from _table_"); Of course this is not very efficient; I performed some tests and found out that it really takes a lot of time... Then I found a better way to deal with it, using an annotation to define native queries: @NamedNativeQuery( name = "fetchData", value = "select * from _table_", resultClass=Entity.class ) and then just using it: Query query = entityManager.createNamedQuery("fetchData"); The performance of the line above is twice as good as where I started from, but still not as good as I expected... Then I found that I can switch to the Hibernate annotation for NamedNativeQuery (JBoss's EJB implementation is based on Hibernate anyway) and add one more thing: @NamedNativeQuery( name = "fetchData2", value = "select * from _table_", resultClass=Entity.class, readOnly=true) readOnly marks whether the results are fetched in read-only mode or not. That sounds good, because at least in my case I don't need to update the data; I just want to fetch it for a report. When I started the server to measure performance, I noticed that the query without readOnly=true (by default it is false) returns results faster and faster with each iteration, while the other one (fetchData2) performs consistently; over time the difference between them gets shorter and shorter, and after 5 iterations the speed of both was almost the same... The questions are: 1) Is there any other way to speed up query usage? It seems that named queries should be prepared once, but I can't confirm it... In fact, creating the query once and then just reusing it would be better from a performance point of view, but it is problematic to cache this object, because after creating the query I can set parameters (when I use ":variable" in the query), and that changes the query object (doesn't it?). So, is there any way to cache them? Or is a named query the best option I can use? 2) Are there any other approaches to make result retrieval faster? For instance, I don't need those entities to be attached; I won't update them; all I need is to fetch a collection of data. Maybe readOnly is the only available way, so I can't speed it up, but who knows :) P.S. I am not asking about DB performance; all I need is how to avoid creating the query every time, to use it efficiently, and to let EJB do less work while returning the same data.
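
    A minimal sketch (bean name invented) of the pattern the question circles around: the named query is registered once when the persistence unit starts, and createNamedQuery() hands back a cheap per-call Query object on which parameters can be set, so the Query itself does not need to be cached. Depending on the Hibernate version, a read-only hint can also be applied per query rather than in the annotation.

      // Sketch only; "fetchData" and Entity come from the question, the bean is invented.
      import java.util.List;
      import javax.ejb.Stateless;
      import javax.persistence.EntityManager;
      import javax.persistence.PersistenceContext;
      import javax.persistence.Query;

      @Stateless
      public class ReportBean {

          @PersistenceContext
          private EntityManager entityManager;

          public List<?> fetchData() {
              // The named query is registered once when the persistence unit starts;
              // createNamedQuery() only looks it up by name, so calling it per request is cheap.
              Query query = entityManager.createNamedQuery("fetchData");
              // If the query had a ":variable" placeholder, it would be set here on this
              // per-call Query object, so there is no shared object to cache and mutate:
              // query.setParameter("variable", value);   // hypothetical parameter
              return query.getResultList();
          }
      }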

    Read the article

  • HTML 5 Canvas performance

    - by Vilius
    Hello there! I've just started playing around with the HTML5 canvas element. For the sake of performance tests, I have made a little ping pong game (http://bit.ly/arTPut). Apart from my quick'n'dirty programming skills, I believe there are also some performance boosts I haven't used. In particular, the ball seems to be blue with a touch of red, but according to my declaration it should be yellow. Would be very nice if someone could help me! Greetings, Vilius

    Read the article

  • SQL INSERT performance omitting field names?

    - by Marco Demaio
    Does anyone know whether removing the field names from an INSERT query results in any performance improvement? I mean, is this: INSERT INTO table1 VALUES (value1, value2, ...) faster for the DB to execute than this: INSERT INTO table1 (field1, field2, ...) VALUES (value1, value2, ...) ? I know the performance difference is probably negligible, but I'd just like to know.

    Read the article

  • Performance impact of not implementing relationships at the database level?

    - by JVerstry
    Let's imagine a data model with customers and invoices. There is a 1 to n relationship between a customer and its invoices. We use an ORM (like Hibernate). One can explicitly implement the 1-n relationship (using JPA for example) or not. If not, then one must do a bit more work to fetch invoices. However, it is much easier to maintain, improve and develop the data model of applications where relationships between objects are not explicitly implemented in the database. My question is, has anyone noticed a significant performance impact when not implementing the relationships in the database?
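
    As an aside, a minimal JPA sketch of the two options being weighed (entity, field and column names invented): mapping the 1-n relationship explicitly versus leaving it unmapped and fetching invoices by hand.

      // Sketch only; names are invented for illustration.
      import java.util.List;
      import javax.persistence.Entity;
      import javax.persistence.GeneratedValue;
      import javax.persistence.Id;
      import javax.persistence.JoinColumn;
      import javax.persistence.ManyToOne;
      import javax.persistence.OneToMany;

      @Entity
      public class Customer {
          @Id @GeneratedValue
          private Long id;

          // Option 1: the 1-n relationship is mapped explicitly and the ORM
          // generates the join/select for us.
          @OneToMany(mappedBy = "customer")
          private List<Invoice> invoices;
      }

      @Entity
      class Invoice {
          @Id @GeneratedValue
          private Long id;

          @ManyToOne
          @JoinColumn(name = "customer_id")
          private Customer customer;
      }

      // Option 2: no mapped relationship; invoices are fetched by hand, e.g.
      //   em.createQuery("select i from Invoice i where i.customerId = :id", Invoice.class)
      //     .setParameter("id", customerId).getResultList();
      //   (assuming the entity keeps a plain customerId field instead of a mapped Customer)
      // Either way, what usually matters most for read performance is whether the
      // customer_id column is indexed, not whether the ORM knows about the relationship.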

    Read the article

  • Performance Cost of a Memcopy in C/C++

    - by Cenoc
    So whenever I write code I always think about the performance implications. I've often wondered, what is the "cost" of using a memcpy relative to other functions, in terms of performance? For example, I may be writing a sequence of numbers to a static buffer and concentrating on a frame within the buffer. In order to keep the frame once I get to the end of the buffer, I might memcpy all of it to the beginning, OR I could implement an algorithm to amortize the computation.

    Read the article

  • C: performance of assignments, binary operations, et cetera...

    - by Shinka
    I've heard many things about performance in C: casting is slow compared to normal assignments, function calls are slow, binary operations are much faster than normal operations, et cetera... I'm sure some of those things are specific to the architecture, and compiler optimization might make a huge difference, but I would like to see a chart to get a general idea of what I should do and what I should avoid to write high-performance programs. Is there such a chart (or a website, a book, anything)?

    Read the article

  • What kind of performance issues does multiple instances of the exact same object have on a game?

    - by lggmonclar
    I'm fairly new to programming, and I've pretty much learned all the things I know on the go, while working on projects. The problem is that for some things I just don't know where to begin searching. My question is about performance, and how multiple instances of the same object can affect it -- specifically, I'm talking about XNA's "GraphicsDevice" class. I have it instantiated in four different parts of my game, and in three of those the object has the exact same values for all the attributes. So, in that case, should I be using the same instance of GraphicsDevice, passing it as a parameter, even if I use it in different classes? I apologize if the question seems redundant, but like I said, I've taught myself most of what I know, so there are quite a few "holes" in my learning process.

    Read the article

  • Algorithm performance

    - by william007
    I am testing an algorithm with different parameters on a computer. I notice that the performance fluctuates for each parameter. Say the first time I run it I get 20 ms, the second time 5 ms, and the third time 4 ms, even though the algorithm should do the same work all 3 times. I am using Stopwatch from the C# library to measure the time; is there a better way to measure the performance without being subject to those fluctuations?
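
    The post is about C#'s Stopwatch, but the usual remedy is language-agnostic: warm the code up, time many runs, and report the median (or minimum) rather than a single run. A rough sketch of that pattern in Java, with a placeholder workload standing in for the algorithm:

      // Sketch of warm-up + repeated timing; runAlgorithm() is a placeholder workload.
      import java.util.Arrays;

      public class TimingSketch {

          static void runAlgorithm() {
              // Placeholder workload standing in for the code under test.
              double x = 0;
              for (int i = 0; i < 1000000; i++) { x += Math.sqrt(i); }
              if (x < 0) System.out.println(x);   // use x so the loop isn't trivially dead code
          }

          public static void main(String[] args) {
              // Warm-up: let JIT compilation, caches, etc. settle before measuring.
              for (int i = 0; i < 10; i++) { runAlgorithm(); }

              int runs = 20;
              long[] samples = new long[runs];
              for (int i = 0; i < runs; i++) {
                  long start = System.nanoTime();
                  runAlgorithm();
                  samples[i] = System.nanoTime() - start;
              }
              Arrays.sort(samples);
              // The median is far less sensitive to one-off spikes than a single run.
              System.out.println("median ns: " + samples[runs / 2]);
          }
      }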

    Read the article

  • ANTS Performance Profiler 7.0 has been released!

    - by Michaela Murray
    Please join me in welcoming ANTS Performance Profiler 7 to the world of .NET. ANTS Performance Profiler is a .NET code profiling tool. It lets you identify performance bottlenecks within minutes and therefore enables you to optimize your application performance. Version 7.0 includes integrated decompilation: when profiling methods and assemblies with no source code file, you can generate source code right from the profiler interface. You can then browse and navigate this automatically generated source as if it were your own. If you have an assembly's PDB file but no source, integrated decompilation even lets you view line-level timings for each method, pinpointing the exact cause of performance bottlenecks. Integrated decompilation is powered by .NET Reflector, but you don't need Reflector installed to use the functionality. Watch this video to see it in action. Also new in ANTS Performance Profiler 7.0: · Full support for SharePoint 2010 - no need to manually configure profiling for the latest version of SharePoint · Full support for IIS Express · Azure and Amazon EC2 support, enabling you to profile in the cloud Please click here for more details about ANTS Performance Profiler 7.0.

    Read the article

  • RAM caching causes severe performance drops

    - by B T
    I have read plenty of threads on memory caching and the standard responses of "a large cache is good, it shouldn't affect performance" and "the kernel knows best". I have recently upgraded from 12.04 to 12.10 and changed from VirtualBox to VMware Workstation, and the performance differences are severe (I suspect because of the latter). When I am running my virtual machine, the system load monitor graph generally shows less than 50% memory usage. The system load indicator shows that the rest of my RAM is used by the cache all the time. Plain and simple, this is the comparison: BEFORE: Cache was used very sparingly; pretty much none of my memory usage was cache. Swappiness was 0 (which caused my memory to be used first, then swap only if needed). Performance was quite good and logical: RAM was used fully first, caching was minimal. I could run enough software to utilize my full 4GB of RAM without any performance degradation whatsoever. Swap space was then used as needed, which was obviously slower (I am on a HDD) but still usable once the current program was loaded into memory. AFTER: Cache is used to fill the full 4GB as soon as my virtual machine is run. Swappiness is 0 (same behaviour as before, but the cache uses the full memory straight away). Performance is terrible and unusable while running Ubuntu software. Basic things like changing windows take 2+ minutes. Changing screens happens frame by frame, sometimes taking up to 5 minutes. I cannot run an IDE and a VM like I could with ease before. So basically, any suggestions on how to get my performance back to how it was before while keeping my current setup? My suspicion is that VMware is the problem, but how do I see what is tied to the use of the cache? Surely there is a way to control this behaviour in software as polished as VMware? Thanks EDIT: It could also be important to note that the behaviour differs depending on whether VMware is open or closed. If VMware is open, the RAM will lock at around 50% usage and 50% cache and go into the complete lock-up mentioned above. By contrast, if VMware is closed (after being open), the RAM will continue to rise as needed, the cache takes only the remaining memory, and there is no noticeable performance degradation.

    Read the article

  • C# performance analysis- how to count CPU cycles?

    - by Lirik
    Is this a valid way to do performance analysis? I want to get nanosecond accuracy and determine the performance of typecasting: class PerformanceTest { static double last = 0.0; static List<object> numericGenericData = new List<object>(); static List<double> numericTypedData = new List<double>(); static void Main(string[] args) { double totalWithCasting = 0.0; double totalWithoutCasting = 0.0; for (double d = 0.0; d < 1000000.0; ++d) { numericGenericData.Add(d); numericTypedData.Add(d); } Stopwatch stopwatch = new Stopwatch(); for (int i = 0; i < 10; ++i) { stopwatch.Start(); testWithTypecasting(); stopwatch.Stop(); totalWithCasting += stopwatch.ElapsedTicks; stopwatch.Start(); testWithoutTypeCasting(); stopwatch.Stop(); totalWithoutCasting += stopwatch.ElapsedTicks; } Console.WriteLine("Avg with typecasting = {0}", (totalWithCasting/10)); Console.WriteLine("Avg without typecasting = {0}", (totalWithoutCasting/10)); Console.ReadKey(); } static void testWithTypecasting() { foreach (object o in numericGenericData) { last = ((double)o*(double)o)/200; } } static void testWithoutTypeCasting() { foreach (double d in numericTypedData) { last = (d * d)/200; } } } The output is: Avg with typecasting = 468872.3 Avg without typecasting = 501157.9 I'm a little suspicious... it looks like there is nearly no impact on the performance. Is casting really that cheap?

    Read the article

  • Poor Ruby on Rails performance when using nested :include

    - by Jeremiah Peschka
    I have three models that look something like this: class Bucket < ActiveRecord::Base has_many :entries end class Entry < ActiveRecord::Base belongs_to :submission belongs_to :bucket end class Submission < ActiveRecord::Base has_many :entries belongs_to :user end class User < ActiveRecord::Base has_many :submissions end When I retrieve a collection of entries doing something like: @entries = Entry.find(:all, :conditions => ['entries.bucket_id = ?', @bucket], :include => :submission) The performance is pretty quick although I get a large number of extra queries because the view uses the Submission.user object. However, if I add the user to the :include statement, the performance becomes terrible and it takes over a minute to return a total of 50 entries and submissions spread across 5 users. When I run the associated SQL commands, they complete in well under a second. @entries = Entry.find(:all, :conditions => ['entries.bucket_id = ?', @bucket], :include => {:submission => :user}) Why would this second command have such terrible performance compared to the first?

    Read the article

  • best way to set up a VM for development (regarding performance)

    - by raticulin
    I am trying to set up a clean VM that I will use in many of my development projects. Hopefully I will use it many times and for a long time, so I want to get it right and set it up so performance is as good as possible. I have searched for a list of things to do, but strangely found only older posts, and none here. My requirements are: My host is Vista 32-bit, and the guest is Windows 2008 64-bit, using VMware Workstation. The VM should also be able to run on VMware ESX. I cannot move to other products (VirtualBox etc.), but info about the performance of each one is welcome for reference. Anyway, I guess most advice would apply to other OSs and other VM products. I need network connectivity to my LAN. The guest will run many Java processes and a DB, and will perform lots of file I/O. What I have found so far is: HOWTO: Squeeze Every Last Drop of Performance Out of Your Virtual PCs: it's an old post, and about Virtual PC, but I guess most things still apply (and also apply to VMware). I guess it makes a difference to disable all unnecessary services, but the ones mentioned there seem like too few; I specifically always disable Windows Search. Any other service I should disable?

    Read the article

  • Rails performance tests "rake test:benchmark" and "rake test:profile" give me errors

    - by go minimal
    I'm trying to run a blank default performance test with Ruby 1.9 and Rails 2.3.5 and I just can't get it to work! What am I missing here??? rails testapp cd testapp script/generate scaffold User name:string rake db:migrate rake test:benchmark - /usr/local/bin/ruby19 -I"lib:test" "/usr/local/lib/ruby19/gems/1.9.1/gems/rake-0.8.7/lib/rake/rake_test_loader.rb" "test/performance/browsing_test.rb" -- --benchmark Loaded suite /usr/local/lib/ruby19/gems/1.9.1/gems/rake-0.8.7/lib/rake/rake_test_loader Started /usr/local/lib/ruby19/gems/1.9.1/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:105:in `rescue in const_missing': uninitialized constant BrowsingTest::STARTED (NameError) from /usr/local/lib/ruby19/gems/1.9.1/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:94:in `const_missing' from /usr/local/lib/ruby19/gems/1.9.1/gems/activesupport-2.3.5/lib/active_support/testing/performance.rb:38:in `run' from /usr/local/lib/ruby19/1.9.1/minitest/unit.rb:415:in `block (2 levels) in run_test_suites' from /usr/local/lib/ruby19/1.9.1/minitest/unit.rb:409:in `each' from /usr/local/lib/ruby19/1.9.1/minitest/unit.rb:409:in `block in run_test_suites' from /usr/local/lib/ruby19/1.9.1/minitest/unit.rb:408:in `each' from /usr/local/lib/ruby19/1.9.1/minitest/unit.rb:408:in `run_test_suites' from /usr/local/lib/ruby19/1.9.1/minitest/unit.rb:388:in `run' from /usr/local/lib/ruby19/1.9.1/minitest/unit.rb:329:in `block in autorun' rake aborted! Command failed with status (1): [/usr/local/bin/ruby19 -I"lib:test" "/usr/l...]

    Read the article

  • Java Performance measurement

    - by portoalet
    Hi, I am doing some Java performance comparisons between my classes, and I'm wondering if there is some sort of Java performance framework to make writing performance measurement code easier. I.e., what I am doing now is trying to measure what effect making a method "synchronized", as in PseudoRandomUsingSynch.nextInt(), has compared to using an AtomicInteger as my "synchronizer". So I am trying to measure how long it takes to generate random integers using 3 threads accessing a synchronized method, looping for say 10000 times. I am sure there is a much better way of doing this. Can you please enlighten me? :) public static void main( String [] args ) throws InterruptedException, ExecutionException { PseudoRandomUsingSynch rand1 = new PseudoRandomUsingSynch((int)System.currentTimeMillis()); int n = 3; ExecutorService execService = Executors.newFixedThreadPool(n); long timeBefore = System.currentTimeMillis(); for(int idx=0; idx<100000; ++idx) { Future<Integer> future = execService.submit(rand1); Future<Integer> future1 = execService.submit(rand1); Future<Integer> future2 = execService.submit(rand1); int random1 = future.get(); int random2 = future1.get(); int random3 = future2.get(); } long timeAfter = System.currentTimeMillis(); long elapsed = timeAfter - timeBefore; out.println("elapsed:" + elapsed); } the class public class PseudoRandomUsingSynch implements Callable<Integer> { private int seed; public PseudoRandomUsingSynch(int s) { seed = s; } public synchronized int nextInt(int n) { byte [] s = DonsUtil.intToByteArray(seed); SecureRandom secureRandom = new SecureRandom(s); return ( secureRandom.nextInt() % n ); } @Override public Integer call() throws Exception { return nextInt((int)System.currentTimeMillis()); } } Regards
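
    For what it's worth, a hedged sketch of the lock-free counterpart the poster wants to compare against (class name and the seed-advance formula are invented; note that the original creates a new SecureRandom inside every nextInt() call, which is likely to dominate either variant's timing). Timing just the nextInt() calls with System.nanoTime after a warm-up tends to give a cleaner comparison than wrapping the whole submit/get loop.

      // Sketch of an AtomicInteger-based variant; the seed-advance formula is an
      // arbitrary LCG step chosen only to give the CAS loop something to do.
      import java.util.concurrent.Callable;
      import java.util.concurrent.atomic.AtomicInteger;

      public class PseudoRandomUsingAtomic implements Callable<Integer> {

          private final AtomicInteger seed;

          public PseudoRandomUsingAtomic(int s) {
              seed = new AtomicInteger(s);
          }

          // Lock-free: the state lives in one AtomicInteger and is advanced with
          // compare-and-set, so no thread blocks on a monitor.
          public int nextInt(int n) {
              while (true) {
                  int current = seed.get();
                  int next = current * 0x343FD + 0x269EC3;   // illustrative LCG step
                  if (seed.compareAndSet(current, next)) {
                      return Math.abs(next % n);
                  }
              }
          }

          @Override
          public Integer call() {
              return nextInt(1000);   // bound chosen arbitrarily for the sketch
          }
      }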

    Read the article

  • Java performance issue

    - by Colby77
    Hi, I've got a question related to Java performance and method execution. In my app there are a lot of places where I have to validate some parameter, so I've written a Validator class and put all the validation methods into it. Here is an example: public class NumberValidator { public static short shortValidator(String s) throws ValidationException{ try{ short sh = Short.parseShort(s); if(sh < 1){ throw new ValidationException(); } return sh; }catch (Exception e) { throw new ValidationException("The parameter is wrong!"); } } ... But I'm thinking about it. Is this OK? It's OO and modularized, but - considering performance - is it a good idea? What if I had an awful lot of invocations at the same time? The snippet above is short and fast, but there are some methods that take more time. What happens when there are a lot of calls to a static method or an instance method of the same class and the method is not synchronized? Do all the callers have to fall in line while the JVM executes them sequentially? Is it a good idea to have several classes identical to the one above and randomly call their identical methods? I think not, because "Don't repeat yourself" and "Duplication is Evil" etc. But what about performance? Thanks in advance.
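
    A small self-contained sketch of the specific worry in the last paragraph (a standard exception stands in for the poster's ValidationException): a static method that is not synchronized and touches only its parameters and locals has no shared state, so the JVM does not make concurrent callers queue up behind one another.

      // Demo: many threads call the same unsynchronized static method concurrently.
      import java.util.concurrent.ExecutorService;
      import java.util.concurrent.Executors;
      import java.util.concurrent.TimeUnit;

      public class ValidatorConcurrencyDemo {

          // No 'synchronized', no fields: each call works only on its own stack frame.
          static short shortValidator(String s) {
              short sh = Short.parseShort(s);
              if (sh < 1) {
                  throw new IllegalArgumentException("The parameter is wrong!");
              }
              return sh;
          }

          public static void main(String[] args) throws InterruptedException {
              ExecutorService pool = Executors.newFixedThreadPool(8);
              for (int i = 0; i < 100; i++) {
                  final String value = Integer.toString(i % 50 + 1);
                  // Concurrent calls to the same static method; the JVM does not
                  // serialize them, since there is no lock and no shared state.
                  pool.submit(() -> System.out.println(shortValidator(value)));
              }
              pool.shutdown();
              pool.awaitTermination(1, TimeUnit.MINUTES);
          }
      }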

    Read the article

  • Need guidelines for optimizing WebGL performance by minimizing shader changes

    - by brainjam
    I'm trying to get an idea of the practicality of WebGL for rendering large architectural interior scenes, consisting of hundreds of thousands of triangles. These triangles are distributed over many objects, and there are many materials in the scene. On the other hand, there are no moving parts. And the materials tend to be fairly simple, mostly based on texture maps. There is a lot of texture map sharing... for example, all the chairs in the scene will share a common map. There is also some multitexturing - up to three textures overlaid in a material. I've been doing a little experimentation and reading, and gather that frequently switching materials during a rendering pass will slow things down. For example, a scene with 200K triangles will have significant performance differences depending on whether there are 10 or 1000 objects, assuming that each time an object is displayed a new material is set up. So it seems that if performance is important, the scene should be sorted by materials so as to minimize material switching. What I'm looking for is guidelines on how to think about the overhead of various state changes, and where I get the biggest bang for the buck. For example: what are the relative performance costs of, say, gl.useProgram(), gl.uniformMatrix4fv(), and gl.drawElements()? Should I try to write ubershaders to minimize shader switching? Should I try to aggregate geometry to minimize the number of gl.drawElements() calls? I realize that mileage may vary depending on browser, OS, and graphics hardware. And I'm also not looking for heroic measures. Just some guidelines from people who have already had some experience in making scenes fast. I'll add that while I've had some experience with fixed-pipeline OpenGL programming in the past, I'm rather new to the WebGL/OpenGL ES 2.0 way of doing things.

    Read the article

  • Improving I/O performance in C++ programs[external merge sort]

    - by Ajay
    I am currently working on a project involving external merge sort using replacement selection and k-way merge. I have implemented the project in C++ (it runs on Linux). It is very simple and right now deals only with fixed-size records. For reading and writing I use the (i/o)fstream classes. After executing the program for a few iterations, I noticed that I/O reads block for requests larger than 4K (the typical block size). In fact, giving buffer sizes greater than 4K causes performance to decrease. The output operations do not seem to need buffering; Linux seems to take care of buffering output. So I issue a write(record) instead of maintaining a special buffer of writes and then flushing them out at once using write(records[]). But the performance of the application does not seem to be great. How could I improve the performance? Should I maintain special I/O threads to take care of reading blocks, or are there existing C++ classes providing this abstraction already? (Something like BufferedInputStream in Java.)
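
    Purely for reference, since the question ends by mentioning it: the Java abstraction the poster has in mind looks roughly like this (file name, record size and the 64 KB buffer are arbitrary choices for the sketch).

      // Sketch: buffered, record-at-a-time reading with an explicit buffer size.
      import java.io.BufferedInputStream;
      import java.io.DataInputStream;
      import java.io.EOFException;
      import java.io.FileInputStream;
      import java.io.IOException;

      public class BufferedReadSketch {
          public static void main(String[] args) throws IOException {
              byte[] record = new byte[128];   // fixed-size records, as in the question
              try (DataInputStream in = new DataInputStream(
                      new BufferedInputStream(new FileInputStream("records.bin"), 64 * 1024))) {
                  while (true) {
                      try {
                          in.readFully(record);   // blocks until a whole record is read
                      } catch (EOFException eof) {
                          break;                   // no complete record left
                      }
                      // ... feed the record into replacement selection ...
                  }
              }
          }
      }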

    Read the article

  • JDBC programms running long time performance issue

    - by phyerbarte
    My program has an issue with Oracle query performance. I believe the SQL itself performs well, because it returns quickly in SQL*Plus. But when my program has been running for a long time, like 1 week, the SQL query (using JDBC) becomes slower (in my logs, the query time is much longer than when I originally started the program). When I restart my program, the query performance comes back to normal. I think there could be something wrong with the way I use the PreparedStatement, because the SQL I'm using does not use placeholders "?" at all - it's just a complex select query. The query process is done by a util class. Here is the pertinent code building the query: public List<String[]> query(String sql, String[] args) { Connection conn = null; conn = openConnection(); conn.setAutoCommit(true); .... PreparedStatement preStatm = null; ResultSet rs = null; ....//set preparedstatement arg code rs = preStatm.executeQuery(); .... finally{ //close rs //close prestatm //close connection } } In my case, the args is always null, so it just passes a query SQL to this query method. Is it possible that this approach slows down the DB query after the program has been running for a long time? Or should I use a Statement instead, or just pass args with "?" in the SQL? How can I find out the root cause of my issue? Thanks.
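
    Not a diagnosis, just a sketch of the structure usually worth checking first in a long-running JDBC program: every ResultSet, PreparedStatement and Connection closed on every code path (via try-with-resources below), so nothing accumulates in the client or in the Oracle session over a week of uptime. The DataSource, SQL and column handling are placeholders.

      // Sketch of a query helper that cannot leak statements or result sets.
      import java.sql.Connection;
      import java.sql.PreparedStatement;
      import java.sql.ResultSet;
      import java.sql.SQLException;
      import java.util.ArrayList;
      import java.util.List;
      import javax.sql.DataSource;

      public class QueryUtil {

          private final DataSource dataSource;   // pooled connections assumed

          public QueryUtil(DataSource dataSource) {
              this.dataSource = dataSource;
          }

          public List<String[]> query(String sql) throws SQLException {
              List<String[]> rows = new ArrayList<String[]>();
              try (Connection conn = dataSource.getConnection();
                   PreparedStatement ps = conn.prepareStatement(sql);
                   ResultSet rs = ps.executeQuery()) {
                  int cols = rs.getMetaData().getColumnCount();
                  while (rs.next()) {
                      String[] row = new String[cols];
                      for (int i = 0; i < cols; i++) {
                          row[i] = rs.getString(i + 1);
                      }
                      rows.add(row);
                  }
              }   // everything is closed here, even if an exception was thrown above
              return rows;
          }
      }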

    Read the article
