Search Results

Search found 1631 results on 66 pages for 'optimize'.


  • What calls trigger a new batch?

    - by sebf
    I am finding that my project is starting to show performance degradation and I need to optimize it. The answer to my previous question and this presentation from NVidia have helped greatly in understanding the performance characteristics of code using the GPU, but there are a couple of things that still aren't clear that I need to know to optimize my drawing. Specifically: which calls mark the boundary between batches? I know that any state change causes a new batch, so that includes: render state changes, buffer changes, shader changes, and render target changes. Correct? What else counts as a 'state change'? Does each Draw**Primitive() call constitute a new batch, even if I issue the same call twice with no state changes, or call it once on one part of the buffer and then again on another? If I were to update a buffer, but not change the bindings, would that be a new batch? That presentation and a DX9 page suggest using all of the texture slots available, which I take to mean loading multiple objects in 'parallel' by mapping their buffers/shaders/textures to slots 1-16. But I am not sure how this works - surely to do this you would need to change the buffer binding, and that would count as a state change? (Or is it a case of you do, but it saves 16 calls so it's OK?)
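
    Whatever the exact boundary rules, the standard mitigation is the same across APIs: sort submissions by state so identical state runs back to back and the renderer only rebinds, i.e. starts a new batch, when something actually differs. Below is a minimal, API-agnostic sketch of that idea in Java; every type and the commented-out bind/draw calls are hypothetical stand-ins, not XNA or Direct3D API.

        import java.util.*;

        // Hypothetical render queue: order draw calls by a packed state key
        // (shader, texture, buffer) so a new batch starts only when the key changes.
        final class DrawCall {
            final int shaderId, textureId, bufferId;
            DrawCall(int s, int t, int b) { shaderId = s; textureId = t; bufferId = b; }
            long stateKey() { // pack the state ids into one sortable key
                return ((long) shaderId << 40) | ((long) textureId << 20) | bufferId;
            }
        }

        final class RenderQueue {
            private final List<DrawCall> calls = new ArrayList<>();
            void submit(DrawCall c) { calls.add(c); }
            void flush() {
                calls.sort(Comparator.comparingLong(DrawCall::stateKey));
                long bound = -1;
                for (DrawCall c : calls) {
                    if (c.stateKey() != bound) { // state change => new batch
                        bound = c.stateKey();
                        // bindShader(...); bindTexture(...); bindBuffer(...);
                    }
                    // drawPrimitives(c); // consecutive calls reuse the bound state
                }
                calls.clear();
            }
        }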

    Read the article

  • Information about how much time is spent in a function, based on the input of this function

    - by olchauvin
    Is there a (quantitative) tool to measure the performance of functions based on their input? So far, the tools I have used to measure the performance of my code (like JetBrains dotTrace for .Net) tell me how much time I spent in each function, but I'd like more information about the parameters passed to the function, in order to know which parameter values impact performance the most. Let's say that I have a function like this:

    int myFunction(int myParam1, int myParam2) {
        // Do and return something based on the value of myParam1 and myParam2.
        // The code is likely to use if, for, while, switch, etc....
    }

    I would like a tool that could tell me how much time is spent in myFunction based on the values of myParam1 and myParam2. For example, the tool would give me a result looking like this:

    For "myFunction":

    value    | value    | Number of | Average
    myParam1 | myParam2 | calls     | time
    ---------|----------|-----------|--------
    1        | 5        | 500       | 301 ms
    2        | 5        | 250       | 1253 ms
    3        | 7        | 1268      | 538 ms
    ...

    That would mean that myFunction has been called 500 times with myParam1=1 and myParam2=5, and that with those parameters it took on average 301 ms to return a value. The idea behind this is to do some statistical optimization by organizing my code so that the blocks of code most likely to be executed are tested before the ones that are less likely to be executed. To put it bluntly, if I know which values are used the most, I can reorganize the if/while/for etc. structure of the function (and the whole program) to optimize it. I'd like to find such tools for C++, Java or .Net. Note: I am not looking for technical tips to optimize the code (like passing parameters as const, inlining functions, initializing the capacity of vectors, and the like).
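
    Lacking an off-the-shelf profiler that buckets by argument values, a hand-rolled wrapper gets close: time each call and aggregate count and elapsed time per parameter combination. A minimal sketch in Java (assuming Java 16+ for records; myFunction is the asker's placeholder):

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.atomic.LongAdder;

        public class ParamProfiler {
            record Key(int p1, int p2) {}
            record Stats(LongAdder calls, LongAdder nanos) {}
            private static final Map<Key, Stats> STATS = new ConcurrentHashMap<>();

            static int profiledMyFunction(int myParam1, int myParam2) {
                long start = System.nanoTime();
                try {
                    return myFunction(myParam1, myParam2);
                } finally {
                    Stats s = STATS.computeIfAbsent(new Key(myParam1, myParam2),
                            k -> new Stats(new LongAdder(), new LongAdder()));
                    s.calls().increment();
                    s.nanos().add(System.nanoTime() - start);
                }
            }

            static int myFunction(int myParam1, int myParam2) { /* ... */ return 0; }

            // Prints one line per (myParam1, myParam2) pair: call count and average time.
            static void report() {
                STATS.forEach((k, s) -> System.out.printf(
                        "myParam1=%d myParam2=%d calls=%d avg=%.1f ms%n",
                        k.p1(), k.p2(), s.calls().sum(),
                        s.nanos().sum() / 1e6 / Math.max(1, s.calls().sum())));
            }
        }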

    Read the article

  • Run Grunt task in Visual Studio Release Build with a bat file

    - by Aligned
    Originally posted on: http://geekswithblogs.net/Aligned/archive/2014/08/19/run-grunt-task-in-visual-studio-release-build-with-a.aspx

    1. Add a BeforeBuild target in your csproj file. Edit the xml with a text editor.

    <Target Name="BeforeBuild">
      <Exec Condition="'$(Configuration)' == 'Release'" Command="script-optimize.bat" />
    </Target>

    2. Create the script-optimize.bat:

    REM "%~dp0" maps to the directory where this file exists
    cd %~dp0\..\YourProjectFolder
    call npm uninstall grunt
    call npm uninstall grunt
    call npm install --cache-min 604800 -g grunt-cli
    call npm install --cache-min 604800 grunt typescript requirejs copy less:compile less:mincompile

    This grunt command will compile TypeScript, run the requireJs optimizer, and compile and minimize LESS.

    3. Make it use the minified code when the Web.config compilation debug is set to false:

    <!-- These CustomCollectFiles actions are used so that the Scripts-Release folder/files are included
         when publishing even though they are not project references -->
    <Target Name="CustomCollectFiles">
      <ItemGroup>
        <_CustomFiles Include="Scripts-Release\**\*" />
      </ItemGroup>
    </Target>

    That should be all you need to get a Grunt task to minify and combine JS (plus other tasks) in a Visual Studio Release build with debug = false. This is a great video of Steve Sanderson talking about SPAs, npm, Knockout, Grunt, Gulp, etc. I highly recommend it.

    Read the article

  • Optimizations employed by ORM's

    - by Kartoch
    I'm teaching JEE, especially JPA, Spring and Spring MVC. As I don't have much experience with large projects, it is difficult to know what to present to students about ORM optimisation. At the present time, I present some classic optimisation tricks:

    - prepared statements (most ORMs implicitly use them by default)
    - first and second-level caches
    - "write first, optimize later"
    - it is possible to switch off the ORM and send SQL commands directly to the database for very frequent, specialized and costly requests

    Are there any other ways the community sees to optimize ORM usage (one classic addition is sketched below)? I'm especially interested in DAO patterns...
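
    One item frequently added to such lists is avoiding the N+1 select problem with fetch joins: a naive mapping issues one query for the parent entities and then one more per parent when a lazy collection is touched. A hedged JPA sketch for teaching purposes; Invoice and its `lines` collection are invented entities, assumed to be mapped with a lazy @OneToMany:

        import javax.persistence.EntityManager;
        import javax.persistence.TypedQuery;
        import java.util.List;

        class InvoiceDao {
            private final EntityManager em;
            InvoiceDao(EntityManager em) { this.em = em; }

            // N+1 version: one query here, then one extra query per invoice
            // the first time invoice.getLines() is touched.
            List<Invoice> findAll() {
                return em.createQuery("SELECT i FROM Invoice i", Invoice.class)
                         .getResultList();
            }

            // Fetch-join version: invoices and their lines in one SQL statement.
            // DISTINCT collapses the duplicated Invoice rows the join produces.
            List<Invoice> findAllWithLines() {
                TypedQuery<Invoice> q = em.createQuery(
                    "SELECT DISTINCT i FROM Invoice i JOIN FETCH i.lines", Invoice.class);
                return q.getResultList();
            }
        }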

    Read the article

  • Is there an offline version of Smush.it available?

    - by Jonathon Watney
    Sometimes I use Smush.it via the YSlow Firefox plugin to non-destructively reduce the file size of JPG images. Is there an offline version available that runs on Windows? And if not is there an alternative? The reason I'd like an offline version is that I'd like to optimize images before I deploy them. Currently Smush.it accepts only public facing URLs for images or a web page (via YSlow) and can't access my internal network. That means I have to deploy, optimize, replace images and deploy again. I'd really like to deploy the optimized images on the first deploy. Update: Here's a very similar question.

    Read the article

  • Performance profiler for a Java application

    - by Nitin Garg
    I need to optimize a Java application. It makes some 3rd party calls. I need some good tool to accurately measure the time taken by individual API calls. To give an idea of the complexity: the application takes a data source file containing 10 lakh (1 million) rows, and it takes around one hour to complete the processing. As a part of processing, it makes some 3rd party calls (including some network calls). I need to identify which calls are taking more time than others, and based on that, find a way to optimize the application. Any suggestions would be appreciated.
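
    Besides full profilers (jvisualvm, bundled with recent Sun JDKs, can sample a run like this), a low-tech approach that works well when the hot spots are a handful of coarse third-party calls is wrapping them and accumulating elapsed time per call site. A minimal sketch; the label and the wrapped call are placeholders:

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.atomic.AtomicLong;
        import java.util.function.Supplier;

        // Accumulates total nanoseconds per label so that, after the full
        // one-hour run, the slowest third-party calls stand out.
        public final class CallTimer {
            private static final Map<String, AtomicLong> TOTALS = new ConcurrentHashMap<>();

            public static <T> T time(String label, Supplier<T> call) {
                long start = System.nanoTime();
                try {
                    return call.get();
                } finally {
                    TOTALS.computeIfAbsent(label, k -> new AtomicLong())
                          .addAndGet(System.nanoTime() - start);
                }
            }

            public static void dump() {
                TOTALS.forEach((label, nanos) ->
                    System.out.printf("%-30s %.2f s%n", label, nanos.get() / 1e9));
            }
        }

        // Usage, with client.fetch(row) standing in for a real 3rd party call:
        //   Result r = CallTimer.time("fetch", () -> client.fetch(row));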

    Read the article

  • Most optimized way to calculate modulus in C

    - by hasanatkazmi
    I have to minimize the cost of calculating a modulus in C. Say I have a number x, and n is the number which will divide x. When n == 65536 (which happens to be 2^16):

    mod = x % n (11 assembly instructions as produced by GCC)

    or

    mod = x & 0xffff, which is equal to mod = x & 65535 (4 assembly instructions)

    So GCC doesn't optimize it to this extent. In my case n is not a power of two but the largest prime less than 2^16, which is 65521. As I showed for n == 2^16, bit-wise operations can optimize the computation. What bit-wise operations can I perform when n == 65521 to calculate the modulus?
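
    A folding reduction exists for this particular prime, because 2^16 ≡ 15 (mod 65521): split x into its high and low 16-bit halves and replace the high half's weight of 65536 with 15. 65521 is the Adler-32 modulus, and some fast Adler-32 implementations use exactly this trick. A sketch (written in Java to match the other examples on this page; the identical shifts and adds work on an unsigned int in C):

        // x % 65521 for non-negative x, without a division:
        // x = (x >> 16)*65536 + (x & 0xFFFF) ≡ 15*(x >> 16) + (x & 0xFFFF) (mod 65521)
        static int mod65521(int x) {
            x = (x & 0xFFFF) + 15 * (x >>> 16); // now x < 2^20
            x = (x & 0xFFFF) + 15 * (x >>> 16); // now x <= 65655
            return x >= 65521 ? x - 65521 : x;  // at most one subtraction left
        }
        // e.g. mod65521(65536) == 15, matching 65536 % 65521.

    Whether this actually beats x % 65521 is worth measuring: for a constant divisor GCC already emits a multiply-by-reciprocal sequence rather than a real division instruction.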

    Read the article

  • MySQL optimized statement

    - by Ivan
    I have a simple table from which I have to extract some records. The problem is that the evaluation function is a very time-consuming stored procedure, so I shouldn't call it twice as in this statement:

    SELECT *, slow_sp(row) FROM table WHERE slow_sp(row)>0 ORDER BY dist DESC LIMIT 10

    First I thought to optimize it like this:

    SELECT *, slow_sp(row) AS value FROM table WHERE value>0 ORDER BY dist DESC LIMIT 10

    But it doesn't work, because "value" is not yet computed when the WHERE clause is evaluated. Any idea how to optimize this statement? Thanks.
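
    One era-appropriate MySQL answer to this exact problem (hedged; verify against your MySQL version): HAVING, unlike WHERE, is applied after the select list has been evaluated, and MySQL's documented extension lets HAVING reference a select-list alias even without GROUP BY, so slow_sp runs once per row. Shown inside a JDBC sketch to stay consistent with the other examples here; the table and column names are the asker's placeholders:

        import java.sql.*;

        class SlowSpQuery {
            static void run(Connection conn) throws SQLException {
                // Filter on the alias with HAVING so slow_sp is invoked only
                // once per row, in the select list.
                String sql = "SELECT t.*, slow_sp(row) AS value FROM `table` t " +
                             "HAVING value > 0 ORDER BY dist DESC LIMIT 10";
                try (Statement st = conn.createStatement();
                     ResultSet rs = st.executeQuery(sql)) {
                    while (rs.next()) {
                        // process the row, e.g. rs.getObject("value")
                    }
                }
            }
        }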

    Read the article

  • MYSQL OR vs IN performance

    - by Scott
    I am wondering if there is any difference in regards to performance between the following:

    SELECT ... FROM ... WHERE someFIELD IN (1,2,3,4)
    SELECT ... FROM ... WHERE someFIELD BETWEEN 0 AND 5
    SELECT ... FROM ... WHERE someFIELD = 1 OR someFIELD = 2 OR someFIELD = 3 ...

    or will MySQL optimize the SQL in the same way compilers will optimize code? EDIT: Changed the AND's to OR's for the reason stated in the comments.
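
    A quick way to settle this empirically is to ask MySQL for its plan: if EXPLAIN reports the same access type, key and row estimate for each form, the optimizer has rewritten them to the same thing. A hedged JDBC sketch (someTable/someFIELD are the question's placeholders):

        import java.sql.*;

        class ExplainCompare {
            // Print the plan columns that reveal whether the three WHERE forms
            // use the index the same way.
            static void explain(Connection conn, String where) throws SQLException {
                try (Statement st = conn.createStatement();
                     ResultSet rs = st.executeQuery(
                             "EXPLAIN SELECT * FROM someTable WHERE " + where)) {
                    while (rs.next()) {
                        System.out.printf("%-50s type=%s key=%s rows=%s%n", where,
                                rs.getString("type"), rs.getString("key"),
                                rs.getString("rows"));
                    }
                }
            }

            static void compare(Connection conn) throws SQLException {
                explain(conn, "someFIELD IN (1,2,3,4)");
                explain(conn, "someFIELD BETWEEN 0 AND 5");
                explain(conn, "someFIELD = 1 OR someFIELD = 2 OR someFIELD = 3");
            }
        }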

    Read the article

  • On Windows XP, programmatically set Pagefile to "No Paging File" on single c: drive

    - by NBPC77
    I'm trying to write a C#/.NET application that optimizes the hard drives for our XP workstations:

    1. Set pagefile to "No paging file"
    2. Reboot
    3. Run a defrag utility to optimize the data and apps
    4. Create a contiguous page file
    5. Reboot, run pagedefrag from Sysinternals

    I'm really struggling with #1. I delete the following key: SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\PagingFiles Upon reboot, the System Control Panel shows "No page file", but c:\pagefile.sys still exists, and since it's in use by the SYSTEM process I can't delete it and I can't optimize the HD. I tried using PendingFileRenameOperations and that bombs out too. I tried using WMI: Win32_PageFileSetting, but that only lets you set sizes (not zero--defaults to 2MB). Of course, if I do the manual steps outlined above, it works. I think I need an API call to make this happen.

    Read the article

  • Python Profiling In Windows, How do you ignore Builtin Functions

    - by Tim McJilton
    I haven't been able to find this anywhere online. I want to use a profiler to find out how to better optimize my code, but when sorting by which functions use up the most time cumulatively, things like str(), print, and other widely used built-in functions eat up much of the profile. What is the best way to profile a Python program so as to see only the user-defined functions, and find out what areas of the code can be optimized? I hope that makes sense; any light you can shed on this subject would be very appreciated.

    Read the article

  • Need help optimizing MYSQL query with join

    - by makeee
    I'm doing a join between the "favorites" table (3 million rows) and the "items" table (600k rows). The query is taking anywhere from .3 seconds to 2 seconds, and I'm hoping I can optimize it some. Favorites.faver_profile_id and Items.id are indexed. Instead of using the faver_profile_id index I created a new index on (faver_profile_id, id), which eliminated the filesort needed when sorting by id. Unfortunately this index doesn't help at all and I'll probably remove it (yay, 3 more hours of downtime to drop the index..). Any ideas on how I can optimize this query? In case it helps: Favorite.removed and Item.removed are "0" 98% of the time. Favorite.collection_id is NULL about 80% of the time.

    SELECT `Item`.`id`, `Item`.`source_image`, `Item`.`cached_image`, `Item`.`source_title`,
           `Item`.`source_url`, `Item`.`width`, `Item`.`height`, `Item`.`fave_count`, `Item`.`created`
    FROM `favorites` AS `Favorite`
    LEFT JOIN `items` AS `Item`
           ON (`Item`.`removed` = 0 AND `Favorite`.`notice_id` = `Item`.`id`)
    WHERE ((`faver_profile_id` = 1) AND (`collection_id` IS NULL)
           AND (`Favorite`.`removed` = 0) AND (`Item`.`removed` = '0'))
    ORDER BY `Favorite`.`id` desc
    LIMIT 50;

    Read the article

  • How do you make your Java application memory efficient?

    - by Boune
    How do you optimize the heap usage of an application that has a lot (millions) of long-lived objects? (A big cache, loading lots of records from a db.)

    - Use the right data type
    - Avoid java.lang.String to represent other data types
    - Avoid duplicated objects
    - Use enums if the values are known in advance
    - Use object pools
    - String.intern() (good idea?)
    - Load/keep only the objects you need

    I am looking for general programming or Java-specific answers. No funky compiler switches.

    Edit: Optimize the memory representation of a POJO that can appear millions of times in the heap.

    Use cases:
    - Load a huge csv file in memory (converted into POJOs)
    - Use Hibernate to retrieve millions of records from a database

    Summary of answers:
    - Use the flyweight pattern
    - Copy on write
    - Instead of loading 10M objects with 3 properties, is it more efficient to have 3 arrays (or another data structure) of size 10M? (Could be a pain to manipulate the data, but if you are really short on memory... see the sketch below.)
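
    That last idea, parallel primitive arrays instead of millions of POJOs, is easy to sketch and the savings are concrete: every object carries a header (typically 8-16 bytes on HotSpot) plus alignment padding plus a reference from whatever collection holds it, while primitive arrays store values back to back. An illustration with invented field names:

        // Row-oriented: ~10M Point objects, each with an object header, padding,
        // and a reference held by the containing list.
        final class Point {
            final int x, y, z;
            Point(int x, int y, int z) { this.x = x; this.y = y; this.z = z; }
        }

        // Column-oriented: three primitive arrays and no per-row object at all;
        // "row" i is just position i in each array.
        final class PointColumns {
            final int[] x, y, z;
            PointColumns(int n) { x = new int[n]; y = new int[n]; z = new int[n]; }
            void set(int i, int xv, int yv, int zv) { x[i] = xv; y[i] = yv; z[i] = zv; }
        }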

    Read the article

  • 2D Engine scrolling on OpenGL via hardware?

    - by drudru
    Hi, I'm using OpenGL as the bottom end for a 2D tiling engine. When everything is 2D, it is simple to optimize certain issues. For example, scrolling. If I know a certain section of the screen needs to scroll off the bottom, then I can just blit over that portion. I'm even moving more than 1 pixel at a time. Without explicit hardware support (think old Nintendo hw), this requires a lot of pixel writes. An on-chip bitblt would be the next best thing. Essentially, I'm looking at how I can optimize my GL calls to use VRAM texture renders as efficient hardware blits. Is it possible to have GL scroll the framebuffer, or should I just resign myself to double-buffering and re-rendering the entire scene for each frame? Thx

    Read the article

  • Optimizing Code

    - by Claudiu
    You are given a heap of code in your favorite language which combines to form a rather complicated application. It runs rather slowly, and your boss has asked you to optimize it. What are the steps you follow to most efficiently optimize the code? What strategies have you found to be unsuccessful when optimizing code? Re-writes: At what point do you decide to stop optimizing and say "This is as fast as it'll get without a complete re-write." In what cases would you advocate a simple complete re-write anyway? How would you go about designing it?

    Read the article

  • c++ Sorting a vector based on the values of another vector, or what's faster?

    - by pollux
    Hi, there are a couple of other posts about sorting a vector A based on the values in another vector B. Most of the other answers suggest creating a struct or a class to combine the values into one object and using std::sort. I'm curious about the performance of such solutions, though, as I need to optimize code which currently implements bubble sort to sort these two vectors. I'm thinking of using a vector<pair<int,int>> and sorting that. I'm working on a blob-tracking application (image analysis) where I try to match previously tracked blobs against newly detected blobs in video frames, where I check each of the frames against a couple of previously tracked frames, and of course the blobs I found in previous frames. I'm doing this 60 times per second (the speed of my webcam). Any advice on optimizing this is appreciated. The code I'm trying to optimize can be seen here: http://code.google.com/p/projectknave/source/browse/trunk/knaveAddons/ofxBlobTracker/ofCvBlobTracker.cpp?spec=svn313&r=313 Thanks
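
    The usual alternative to zipping the two vectors into one container of pairs is to sort an array of indices by the key vector and then apply the permutation; it avoids constructing pair objects and extends to any number of parallel vectors. A sketch of the pattern (in Java to stay consistent with the other examples on this page; in C++ the same idea is std::sort on a vector<int> of indices with a comparator that reads B):

        import java.util.Arrays;
        import java.util.Comparator;

        class SortByOtherVector {
            // Returns the elements of `values` reordered to follow the
            // ascending order of the corresponding entries in `keys`.
            static int[] sortedByKeys(int[] values, int[] keys) {
                Integer[] idx = new Integer[values.length];
                for (int i = 0; i < idx.length; i++) idx[i] = i;
                Arrays.sort(idx, Comparator.comparingInt(i -> keys[i]));
                int[] out = new int[values.length];
                for (int i = 0; i < out.length; i++) out[i] = values[idx[i]];
                return out;
            }
        }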

    Read the article

  • Why does InnoDB keep growing for every update?

    - by Akash Kava
    I have a table which consists of heavy blobs, and I wanted to conduct some tests on it. I know deleted space is not reclaimed by InnoDB, so I decided to reuse existing records by updating their values instead of creating new records. But I noticed that whether I delete and insert a new entry, or do an UPDATE on an existing row, InnoDB keeps on growing. Assuming I have 100 rows, each storing 500KB of information, my InnoDB size is 10MB; now when I call UPDATE on all rows (no insert/no delete), InnoDB grows by ~8MB for every run I do. All I am doing is storing exactly 500KB of data in each row, with little modification, and the size of the blob is fixed. What can I do to prevent this? I know about OPTIMIZE TABLE, but I can't do it because in regular usage the table is going to be 60-100GB big, and running optimize will just stall the entire server.

    Read the article

  • .net load balancing for server

    - by user1439111
    Some time ago I wrote server software which is currently running at its max (3k users average). So I decided to rewrite certain parts so I can run the software on another server to balance its load. I can't simply start another instance of the server, since there is some data which has to be available to all users. So I was thinking of creating a small manager, and all the servers connect and send their (relevant) data to the manager. But it also got me thinking about another problem: the manager could also reach its limits, which is exactly what I'm trying to prevent in the future. So I would like to know how I could fix this problem. (I have already tried to optimize critical parts of the software, but I can't optimize it forever.)

    Read the article

  • Interpreted vs. Compiled Languages for Web Sites (PHP, ASP, Perl, Python, etc.)

    - by Andrew Swift
    I build database-driven web sites. Previously I have used Perl or PHP with MySQL. Now I am starting a big new project, and I want to do it in the way that will result in the most responsive possible site. I have seen several pages here where questions about how to optimize PHP are criticized with various versions of "it's not worth going to great lengths to optimize PHP since it's an interpreted language and it won't make that much difference". I have also heard various discussions (especially on the SO podcast) about the benefits of compiled vs. interpreted languages, and it seems as though it would be in my interest to use a compiled language to serve up the site instead of an interpreted one. Is this even possible in a web context? If so, what would be a reasonable language choice? In addition to speed, one benefit I foresee is the possibility of finding bugs at compile time instead of having to debug the web site. Is this reasonable to expect?

    Read the article

  • Upcoming Webcast on June 17: Gain Control Over Your Financial Close

    - by Theresa Hickman
    Accenture and Oracle EPM (Enterprise Performance Management) and GRC (Governance, Risk, and Compliance) will be hosting a live webcast called "Gain Control Over Your Financial Close - Confidence in the Process, Trust in the Numbers."
    When: Thursday, June 17, 2010
    Time: 9:00am PST (Noon EST)
    Don't miss this chance to find out how you could optimize the financial close process and transform the speed, quality and integrity of your financial reporting. For more information and to register for this event, see this webpage.

    Read the article

  • Enterprise Cloud Computing: Risk and Economics

    Cloud computing can help optimize a company's capital investments by reducing its costs for hardware, software and real estate, resulting in a much lower total cost of ownership and, ultimately, a whole new way of looking at the economics of operational IT.

    Read the article
