Search Results

Search found 13403 results on 537 pages for 'epm performance tuning'.


  • The clock problem - to if or not to if?

    - by trejder
    Let's say we have a simple digital clock. To "power" it, we use a routine executed every second, in which we update the seconds part. But what about the minutes and hours parts? Which is better / more professional / offers better performance: (1) skip all checking and update the hour, minute and seconds parts every single second, or (2) use an if plus a variable to check whether 60 (or 3600) seconds have passed, and update the minute / hour part only at those precise moments? This leads to the question: which is worse -- unnecessary redraws (first approach) or extra ifs? I've just spotted a JavaScript digital clock, one of millions of similar ones on billions of pages, and noticed that all three parts (hours, minutes and seconds) are updated every second, even though the first changes its value only once per 3600 seconds and the second only once per 60 seconds. I'm not a very experienced developer, so I might be wrong, but everything I've learnt up until now tells me that ifs are far cheaper than executing drawing / refreshing sequences only to draw the same content. (A sketch of the second approach follows below.)

    Read the article
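
    For illustration, a minimal Java sketch of the second approach; the class name and the draw* hooks are hypothetical. The seconds field is redrawn every tick, while the minute and hour fields are touched only when their cached values actually change, so the extra per-tick cost is just two integer comparisons.

        import java.time.LocalTime;

        // Sketch: cache the last-drawn minute/hour and redraw only on change.
        public class ClockUpdater {
            private int lastMinute = -1;
            private int lastHour = -1;

            // Called once per second by some timer.
            void tick() {
                LocalTime now = LocalTime.now();
                drawSeconds(now.getSecond());          // changes every tick, so always redraw
                if (now.getMinute() != lastMinute) {   // at most one extra redraw per 60 ticks
                    lastMinute = now.getMinute();
                    drawMinutes(lastMinute);
                }
                if (now.getHour() != lastHour) {       // at most one extra redraw per 3600 ticks
                    lastHour = now.getHour();
                    drawHours(lastHour);
                }
            }

            // Hypothetical rendering hooks; a real clock would update DOM/canvas/Swing labels here.
            private void drawSeconds(int s) { System.out.printf("ss=%02d%n", s); }
            private void drawMinutes(int m) { System.out.printf("mm=%02d%n", m); }
            private void drawHours(int h)   { System.out.printf("hh=%02d%n", h); }

            public static void main(String[] args) throws InterruptedException {
                ClockUpdater clock = new ClockUpdater();
                for (int i = 0; i < 3; i++) {
                    clock.tick();
                    Thread.sleep(1000);
                }
            }
        }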

  • Why is rvalue write in shared memory array serialised?

    - by CJM
    I'm using CUDA 4.0 on a GPU with compute capability 2.1. One of my device functions is the following:

        __device__ void test(int n, int* itemp)  // itemp is a shared memory pointer
        {
            const int tid = threadIdx.x;
            const int bdim = blockDim.x;
            int i, j, k;
            bool flag = 0;
            itemp[tid] = 0;
            for (i = tid; i < n; i += bdim) {
                // { code that produces some values of "flag" }
            }
            itemp[tid] = flag;
        }

    Each thread checks some conditions and produces a 0/1 flag, and then writes that flag to the tid-th location of a shared int array. The write statement "itemp[tid] = flag;" gets serialized -- though "itemp[tid] = 0;" does not. This is causing a huge performance lag which technically should not be there, and I want to avoid it. Please help.

    Read the article

  • My first blog post…

    - by steveh99999
    I've been meaning to start a blog for a while now (OK, for several years…), so finally, here it begins. First post, something really simple: a wise man once told me the best way to improve SQL Server performance. Store less data. That's it, that's all there is to it. Over the years, I've seen the following:
    - a 200 GB database which held 3 days' worth of data. Once business requirements changed, we were able to hold only 1 day's data in this database.
    - a table developed by DBAs to hold application table cardinality information. That information was collected at 2-hour intervals, every day, for 7 years! After 7 years the DBA space-info table had become the largest table in the database - 60 million rows! It was a simple change to remove a lot of the historical intra-day data and change the schedule to run only once per evening. Suddenly that table held 6 million rows instead of 60 million.
    - lots of backup and restore history held in msdb. See this post by Brent Ozar for more details on this issue.
    Imagine how much faster the backups, DBCC checks and reindexes ran when the above three changes were implemented! How often do you review your big databases / tables to see if you're actually holding only data that is really required by the business?

    Read the article

  • CUDA driver, CPU/GPU performance issue

    - by elect
    I implemented an RNS Montgomery exponentiation in CUDA, and on the CPU for comparison. Everything works, everything is fine, and it runs on just one SM. However, I have to tell you about a strange regression in both CPU and GPU performance. During development, about two months ago, I was using the CUDA 5 preview on Ubuntu 11.04 64-bit. At that time I reached the following performance: cpu 460 ms, gpu 120 ms. Then one day when I turned on the PC, the graphical environment didn't start. I don't know what the problem was, but I switched to the console and reinstalled the CUDA driver. At the next boot the timings had changed: cpu 310 ms, gpu 80 ms. I was puzzled; nice to see, but I wondered how that could be possible. I then went on holiday for 10 days and continued developing and optimizing on my notebook (not the same part of the code, some additional stuff). When I was back, I just updated the source files, and the timings went back to 460/120 ms. I couldn't believe it. I tried installing the CUDA 5 RC and updating the video driver too... nothing changed. I checked Debug/Release and the CUDA compute capability settings, but the problem seems to be somewhere else. Looking around the net I found this; I am pretty sure it must have something to do with the driver, because the performance change affected both the CPU and the GPU. Do you have any tips/ideas/suggestions?

    Read the article

  • Why is quicksort better than other sorting algorithms in practice?

    - by Raphael
    This is a repost of a question on cs.SE by Janoma. Full credits and spoils to him or cs.SE. In a standard algorithms course we are taught that quicksort is O(n log n) on average and O(n²) in the worst case. At the same time, other sorting algorithms are studied which are O(n log n) in the worst case (like mergesort and heapsort), and even linear time in the best case (like bubblesort), but with some additional memory requirements. After a quick glance at some more running times, it is natural to say that quicksort should not be as efficient as the others. Also, consider that students learn in basic programming courses that recursion is not really good in general because it could use too much memory, etc. Therefore (even though this is not a real argument), this gives the impression that quicksort might not be really good, because it is a recursive algorithm. Why, then, does quicksort outperform other sorting algorithms in practice? Does it have to do with the structure of real-world data? Does it have to do with the way memory works in computers? I know that some kinds of memory are way faster than others, but I don't know if that's the real reason for this counter-intuitive performance (when compared to theoretical estimates). (A plain in-place quicksort sketch follows below for reference.)

    Read the article
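
    For reference, a minimal textbook Java quicksort (Lomuto partitioning with a random pivot), not the tuned variant real libraries ship. The detail relevant to the question is that partitioning is a single sequential, in-place pass over the array, so it needs no auxiliary buffers (unlike mergesort) and is very cache-friendly.

        import java.util.Arrays;
        import java.util.concurrent.ThreadLocalRandom;

        // Textbook in-place quicksort; a sketch, not a tuned library sort.
        public final class QuickSortSketch {
            static void quicksort(int[] a, int lo, int hi) {
                if (lo >= hi) return;
                // A random pivot keeps the O(n^2) worst case unlikely on real inputs.
                int p = ThreadLocalRandom.current().nextInt(lo, hi + 1);
                swap(a, p, hi);
                int pivot = a[hi];
                int i = lo;                      // boundary of the "< pivot" region
                for (int j = lo; j < hi; j++) {  // one sequential, cache-friendly pass
                    if (a[j] < pivot) swap(a, i++, j);
                }
                swap(a, i, hi);                  // move the pivot into its final position
                quicksort(a, lo, i - 1);
                quicksort(a, i + 1, hi);
            }

            static void swap(int[] a, int i, int j) { int t = a[i]; a[i] = a[j]; a[j] = t; }

            public static void main(String[] args) {
                int[] a = {5, 3, 8, 1, 9, 2, 7, 4, 6};
                quicksort(a, 0, a.length - 1);
                System.out.println(Arrays.toString(a));   // [1, 2, ..., 9]
            }
        }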

  • Use a SQL Database for a Desktop Game

    - by sharethis
    Developing a game engine: I am planning a computer game and its engine. There will be a 3-dimensional world with a first-person view, and it will be single-player for now. The programming language is C++ and it uses OpenGL.
    Data-centered design decision: My design decision is to use a data-centered architecture, where there is a global event manager and a global data manager. There are many components like physics, input, sound, renderer, AI, ... Each component can trigger and listen to events. Moreover, each component can read, edit, create and remove data. The question is about the data manager.
    Whether to use a relational database: Should I use a SQL database, e.g. SQLite or MySQL, to store the game data? This covers virtually all game content, like items, characters and inventories, except for meshes and textures, which are even more performance-critical, so I will keep them in memory. Is a SQL database fast enough for real-time reading and writing of game information, like the position of a moving character? I also need to care about cross-platform compatibility. Aside from keeping everything in memory, what alternatives do I have? (A toy sketch of the "query objects near the player" idea follows below.)
    The advantages would be: The advantage of using a relational database like MySQL would be the data-oriented structure, which allows fast computation. I would not need objects for representing entities. I could easily query the data of objects near the player that is needed for rendering, and I wouldn't have to care about the data of objects far away. Moreover, there would be no need for savegames, since the whole game state is saved in the database. Last but not least, expanding the game to an online game would be relatively easy, because there already is a place where the whole game state is stored.

    Read the article
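
    As a toy illustration of the "query only the objects near the player" idea, here is a hedged Java sketch using an in-memory SQLite database over JDBC. It assumes the xerial sqlite-jdbc driver on the classpath, and the entity table and its columns are made up; whether such queries are fast enough to run every frame is exactly the open question above.

        import java.sql.*;

        // Sketch: in-memory SQLite holding entity positions, queried by proximity to the player.
        public final class WorldDbSketch {
            public static void main(String[] args) throws SQLException {
                try (Connection db = DriverManager.getConnection("jdbc:sqlite::memory:")) {
                    try (Statement st = db.createStatement()) {
                        st.execute("CREATE TABLE entity (id INTEGER PRIMARY KEY, name TEXT, x REAL, y REAL, z REAL)");
                        st.execute("INSERT INTO entity (name, x, y, z) VALUES ('goblin', 10, 0, 12), ('chest', 200, 0, 350)");
                    }
                    double px = 0, pz = 0, radius = 50;   // player position (x/z plane) and view radius
                    String sql = "SELECT name, x, y, z FROM entity "
                               + "WHERE x BETWEEN ? AND ? AND z BETWEEN ? AND ?";  // cheap bounding-box filter
                    try (PreparedStatement ps = db.prepareStatement(sql)) {
                        ps.setDouble(1, px - radius); ps.setDouble(2, px + radius);
                        ps.setDouble(3, pz - radius); ps.setDouble(4, pz + radius);
                        try (ResultSet rs = ps.executeQuery()) {
                            while (rs.next()) {
                                System.out.printf("near player: %s at (%.1f, %.1f, %.1f)%n",
                                        rs.getString("name"), rs.getDouble("x"),
                                        rs.getDouble("y"), rs.getDouble("z"));
                            }
                        }
                    }
                }
            }
        }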

  • Java single Array best choice for accessing pixels for manipulation?

    - by Petrol
    I am just watching this tutorial https://www.youtube.com/watch?v=HwUnMy_pR6A and the guy (who seems to be pretty competent) is using a single array to store and access the pixels of his to-be-rendered image. I was wondering if this really is the best way to do it. The multi-dimensional-array alternative does involve one more pointer dereference, but arrays offer O(1) access by index, and calculating the index into a single array seems to take just one addition and one multiplication per pixel. And if multi-dimensional arrays really are that bad, couldn't you use something with hashing to avoid those addition and multiplication operations? EDIT: here is his code:

        public class Screen {
            private int width, height;
            public int[] pixels;

            public Screen(int width, int height) {
                this.width = width;
                this.height = height;
                // creating an array with one index/int for every pixel;
                // a single array has better performance than a multi-dimensional array
                pixels = new int[width * height];
            }

            public void render() {
                for (int y = 0; y < height; y++) {
                    for (int x = 0; x < width; x++) {
                        pixels[x + y * width] = 0xff00ff;
                    }
                }
            }
        }

    (A comparison sketch of the flat layout against an int[][] grid follows below.)

    Read the article
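
    For comparison, a small self-contained sketch (the sizes are arbitrary) of the two layouts. The flat array pays one multiply and one add per pixel, and the multiply can be hoisted out of the inner loop; the nested int[][] pays an extra array dereference per row, and its rows need not be contiguous in memory. A hash-based structure would add boxing and hashing costs far larger than the arithmetic it avoids.

        // Sketch comparing a flat int[] against an int[][] for pixel storage.
        public final class PixelLayoutSketch {
            public static void main(String[] args) {
                final int width = 320, height = 240;

                // Flat layout: index = x + y * width (one add, one multiply).
                int[] flat = new int[width * height];
                for (int y = 0; y < height; y++) {
                    int row = y * width;                 // hoist the multiply out of the inner loop
                    for (int x = 0; x < width; x++) {
                        flat[row + x] = 0xff00ff;
                    }
                }

                // Nested layout: one extra dereference per row access, and the rows
                // are separate objects that may not sit next to each other in memory.
                int[][] grid = new int[height][width];
                for (int y = 0; y < height; y++) {
                    for (int x = 0; x < width; x++) {
                        grid[y][x] = 0xff00ff;
                    }
                }

                System.out.println(flat[10 + 5 * width] == grid[5][10]);   // true
            }
        }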

  • What is the Relative Performance of Pseudo-Class and Custom Selectors?

    - by James Wiseman
    It's my understanding that, in terms of selector speed, #ID selectors are fastest, followed by element selectors, and then .class selectors. I have always assumed that pseudo-class selectors and custom selectors (those in the form ':selector') are similar to .class selectors, but I realised that I'm just not sure. I realise that this does depend on the complexity of the code within the pseudo-class/custom selector, so I'd like to know the answer with that excluded as a factor. Any help would be appreciated. Thanks.

    Read the article

  • Ever any performance difference between Java >> and >>> right shift operators?

    - by Sean Owen
    Is there ever reason to think the >> (signed) and >>> (unsigned) right bit-shift operators in Java would perform differently? I can't detect any difference on my machine. This is purely an academic question; it's never going to be the bottleneck, I'm sure. I know it's best to write what you mean foremost: use >> for division by 2, for example. I assume it comes down to which architectures have which operations implemented as an instruction. (A crude timing sketch follows below.)

    Read the article
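
    A rough way to look for a difference is a crude timing loop like the sketch below; a proper harness such as JMH would be more trustworthy, since JIT warm-up and dead-code elimination can easily distort such numbers. On common architectures both shifts map to single instructions (for example sar versus shr on x86), so no measurable gap is expected.

        // Crude micro-benchmark sketch: compare >> and >>> over many iterations.
        public final class ShiftTiming {
            public static void main(String[] args) {
                final int n = 200_000_000;
                long acc = 0;

                long t0 = System.nanoTime();
                for (int i = 1; i <= n; i++) acc += i >> 3;     // signed (arithmetic) shift
                long t1 = System.nanoTime();
                for (int i = 1; i <= n; i++) acc += i >>> 3;    // unsigned (logical) shift
                long t2 = System.nanoTime();

                // Print acc so the JIT cannot discard the loops as dead code.
                System.out.printf("acc=%d   >> : %d ms   >>> : %d ms%n",
                        acc, (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000);
            }
        }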

  • How best to pre-install OR pre-load OR cache JavaScript library to optimize performance?

    - by Kabeer
    Hello. I am working on an intranet application, so I have some control over the client machines. The JavaScript library I am using is fairly large. I would like to pre-install OR pre-load OR cache the JavaScript library on each machine (and in each browser) so that it does not travel over the wire for each request. I know that browsers do cache a JavaScript library for subsequent requests, but I would like the library to be cached once for all subsequent requests, sessions and users. What is the best mechanism to achieve this? (A sketch of one such mechanism, far-future HTTP cache headers, follows below.)

    Read the article
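
    One common mechanism is simply to serve a versioned copy of the library with far-future cache headers, so that each browser downloads it once and then reuses it across pages and sessions on that machine. Below is a hedged Java sketch, assuming a servlet-based intranet app and the javax.servlet API; the file path, version number and header values are illustrative, and the servlet mapping is omitted.

        import java.io.IOException;
        import java.io.OutputStream;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        // Sketch: serve a versioned JS bundle with aggressive caching so each browser
        // fetches it once and reuses it across pages and sessions.
        public class CachedScriptServlet extends HttpServlet {
            private static final Path BUNDLE = Path.of("/opt/app/static/lib-1.4.2.min.js"); // hypothetical path

            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws ServletException, IOException {
                resp.setContentType("application/javascript");
                // Far-future caching: one year, shared caches allowed, no revalidation until expiry.
                resp.setHeader("Cache-Control", "public, max-age=31536000, immutable");
                try (OutputStream out = resp.getOutputStream()) {
                    Files.copy(BUNDLE, out);
                }
            }
        }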

  • Why does Microsoft Windows' performance appear to degrade over time?

    - by Ben Aston
    Windows XP/2k3 and earlier (I can't attest to Vista, but I suspect it's the same) all appear to become more sluggish over time as applications are installed and uninstalled. This is not a scientifically tested observation, but more a piece of wisdom learned through experience. (I've always suspected the registry of being behind the issue.) Does anyone have any concrete evidence of this degradation occurring, or is it just an invalid perception of mine?

    Read the article

  • Is there a way to increase the performance of my simple text filter?

    - by djerry
    Hey guys, I'm writing a filter that will pick out items. I have a list of objects; the objects contain a number, a name and some other irrelevant items. At the moment, the list contains 200 items. When typing in a textbox, I check whether the string matches part of the number/name of the objects in the list; if so, I add them to the listbox. Here's the code for my textbox TextChanged event:

        private void txtTelnumber_TextChanged(object sender, TextChangedEventArgs e)
        {
            lstOverview.Items.Clear();
            string data = "";
            foreach (ucTelListItem telList in _allUsers)
            {
                data = telList.User.H323 + telList.User.E164;
                if (data.Contains(txtTelnumber.Text))
                    lstOverview.Items.Add(telList);
            }
        }

    I sometimes see a little delay when entering a character, especially when I go from 4 records to 200 records (so when I had a filter and 4 records matched, and I backspace and the whole list appears again). My list is a list of user controls, because I found it takes less time to load the user controls from a list than to initialize a new user control each time. Can I do something about the code, or is it just adding the user controls to the listbox that causes the small delay (< 1 sec)? Thanks in advance. (A sketch of the precompute-once idea follows below.)

    Read the article
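
    The avoidable cost in the handler above is rebuilding the H323+E164 string for every item on every keystroke; precomputing a search key once per item keeps the per-keystroke work to a cheap Contains over ready-made strings. The question is C#/WPF, but the idea is language-neutral, so here is a minimal Java sketch with hypothetical type and field names.

        import java.util.ArrayList;
        import java.util.List;

        // Sketch: precompute each item's searchable key once, then filter cheaply per keystroke.
        // The record types stand in for the WPF user controls and their user data.
        public final class TelFilterSketch {
            record User(String h323, String e164) {}

            record Entry(User user, String key) {
                static Entry of(User u) {
                    return new Entry(u, (u.h323() + u.e164()).toLowerCase()); // built once, not per keystroke
                }
            }

            static List<Entry> filter(List<Entry> all, String typed) {
                String needle = typed.toLowerCase();
                List<Entry> hits = new ArrayList<>();
                for (Entry e : all) {
                    if (e.key().contains(needle)) hits.add(e);   // no string concatenation here
                }
                return hits;
            }

            public static void main(String[] args) {
                List<Entry> all = List.of(
                        Entry.of(new User("alice", "1001")),
                        Entry.of(new User("bob", "2002")));
                System.out.println(filter(all, "100").size());   // 1
            }
        }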

  • Android Performance Question: Many small apps or one big app?

    - by kunjaan
    I read this quote on one of the web pages: "If you are writing a large application, consider dividing it into a suite of applications and services. Smaller applications load faster and use fewer resources. Making a suite of applications, content providers, and services makes your code more open to incorporation into other applications, as described in the 'Use and be used' tip." Is this true? What is the rule of thumb for the size of an app?

    Read the article

  • How can I improve the performance of this double-for print?

    - by Florenc
    I have the following static method that prints data imported from a 40,000-line .xls spreadsheet. It currently takes about 27 seconds to print the data to the console, and the memory consumption is huge.

        import org.apache.poi.hssf.usermodel.*;
        import org.apache.poi.ss.usermodel.*;

        public static void printSheetData(List<List<HSSFCell>> sheetData) {
            for (int i = 0; i < sheetData.size(); i++) {
                List<HSSFCell> list = (List<HSSFCell>) sheetData.get(i);
                for (int j = 0; j < list.size(); j++) {
                    HSSFCell cell = (HSSFCell) list.get(j);
                    System.out.print(cell.toString());
                    if (j < list.size() - 1) {
                        System.out.print(", ");
                    }
                }
                System.out.println("");
            }
        }

    Disclaimer: I know, I know, large data belongs in a database, don't print output to the console, premature optimization is the root of all evil... (A buffered variant follows below.)

    Read the article
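
    Much of the time is likely spent in the many small unbuffered System.out.print calls rather than in POI itself. Below is a hedged variant of the method that builds each line in a reused StringBuilder and writes it through a BufferedWriter; it assumes Apache POI on the classpath, as in the original, and produces the same output with far fewer I/O calls.

        import java.io.BufferedWriter;
        import java.io.IOException;
        import java.io.OutputStreamWriter;
        import java.util.List;
        import org.apache.poi.hssf.usermodel.HSSFCell;

        public final class SheetPrinter {
            // Same output as printSheetData, but batched through a 64 KB buffer
            // instead of thousands of tiny unbuffered writes.
            public static void printSheetDataBuffered(List<List<HSSFCell>> sheetData) throws IOException {
                BufferedWriter out = new BufferedWriter(new OutputStreamWriter(System.out), 1 << 16);
                StringBuilder line = new StringBuilder(256);
                for (List<HSSFCell> row : sheetData) {
                    line.setLength(0);                   // reuse the builder for every row
                    for (int j = 0; j < row.size(); j++) {
                        line.append(row.get(j).toString());
                        if (j < row.size() - 1) {
                            line.append(", ");
                        }
                    }
                    line.append('\n');
                    out.write(line.toString());
                }
                out.flush();                             // flush, but do not close System.out
            }
        }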

  • Does a clustered index on a foreign key column increase join performance vs a non-clustered one?

    - by alpav
    In many places it's recommended that clustered indexes are best utilized when selecting a range of rows with a BETWEEN statement. When I join on a foreign key field in such a way that this clustered index is used, I would guess that clustering should help there too, because a range of rows is being selected even though they all have the same clustered key value and BETWEEN is not used. Considering that I care only about that one select with the join and nothing else, am I wrong in my guess?

    Read the article

  • Performance: serve all CSS at once, or as it's needed?

    - by yaya3
    As far as I know, these days there are two main techniques for including CSS in a website: A) provide all the CSS used by the website in one (compressed) file, or B) provide only the CSS required by the elements on the page currently being viewed.
    Positives for A: the entire CSS used on the site is cached on the first visit via one HTTP request.
    Negatives for A: if it's a big file, it will take a long time to load initially.
    Positives for B: faster initial load time.
    Negatives for B: more HTTP requests, more files to cache.
    Is there anything (fundamental) that I am missing here?

    Read the article

  • How can I Monitor the Performance of Individual Apps on Windows?

    My XP machine has become terribly slow and I want to identify the application at fault. It seems to be related to disk access rather than processor hogging. I can look at the task manager to get a good idea but it's not ideal. I was wondering if there was some application that can monitor all aspects of processes effectively. Is Process Explorer my only hope?

    Read the article

  • Which .NET performance and/or memory profilers will allow me to profile a DLL?

    - by Eric
    I write a lot of .NET-based plug-ins for other programs, which are usually compiled as a DLL that the native application loads and starts up. I've been using Equatec's profiler, which works great, but now I would like something with more features, including the ability to profile memory usage. I tried out Red Gate's ANTS Profiler, but as far as I can see there is no way to profile a DLL; the only option is to profile an EXE. So my question is: what other profiling tools are available that will allow me to profile a single library DLL rather than an EXE? I'm assuming this would require injecting profiling code into the library, as Equatec does?

    Read the article

  • What performance indicators can I use to convince management that I need my development PC upgraded?

    - by Aaron Daniels
    At work, my PC is slow. I feel that I could be way more productive if I just weren't waiting for Visual Studio and everything else to respond. My PC isn't bad (dual-core, 3 GB of RAM), but there is a lot of corporate software and whatnot slowing everything down and sometimes locking it up. Now, some developers have begun getting Windows 7 machines with 8 GB of RAM, and of course I start salivating at this. However, I was told that I "had to justify" why I should get a new machine. I can think of a lot of different things, but I am curious as to what everyone else on SO would have to say. NOTE: Ideally, these reasons should be specifically related to .NET development in Visual Studio on a Windows machine. This isn't a "how can I make my machine faster" question.

    Read the article

  • Tips for improving performance of a DB that is above 40 GB (SQL Server 2005) and growing monthly

    - by HotTester
    The current DB of our project has crossed 40 GB this month, and on average it is growing by around 3 GB per month. All the tables are well normalized and proper indexing has been used, but still, as the size grows, it is taking more time to run even basic queries like 'select count(1) from table'. So can you share some more points that will help on this front? The database is SQL Server 2005. Furthermore, if we implement partitioning, wouldn't it create overhead? Thanks in advance.

    Read the article

  • Is there a performance gain from defining routes in app.yaml versus one large mapping in a WSGIApplication?

    - by jgeewax
    Scenario 1: This involves using one "gateway" route in app.yaml and then choosing the RequestHandler inside the WSGIApplication.

    app.yaml:

        - url: /.*
          script: main.py

    main.py:

        import wsgiref.handlers
        from google.appengine.ext import webapp

        class Page1(webapp.RequestHandler):
            def get(self):
                self.response.out.write("Page 1")

        class Page2(webapp.RequestHandler):
            def get(self):
                self.response.out.write("Page 2")

        application = webapp.WSGIApplication([
            ('/page1/', Page1),
            ('/page2/', Page2),
        ], debug=True)

        def main():
            wsgiref.handlers.CGIHandler().run(application)

        if __name__ == '__main__':
            main()

    Scenario 2: This involves defining two routes in app.yaml and then two separate scripts, one for each (page1.py and page2.py).

    app.yaml:

        - url: /page1/
          script: page1.py
        - url: /page2/
          script: page2.py

    page1.py:

        import wsgiref.handlers
        from google.appengine.ext import webapp

        class Page1(webapp.RequestHandler):
            def get(self):
                self.response.out.write("Page 1")

        application = webapp.WSGIApplication([
            ('/page1/', Page1),
        ], debug=True)

        def main():
            wsgiref.handlers.CGIHandler().run(application)

        if __name__ == '__main__':
            main()

    page2.py:

        import wsgiref.handlers
        from google.appengine.ext import webapp

        class Page2(webapp.RequestHandler):
            def get(self):
                self.response.out.write("Page 2")

        application = webapp.WSGIApplication([
            ('/page2/', Page2),
        ], debug=True)

        def main():
            wsgiref.handlers.CGIHandler().run(application)

        if __name__ == '__main__':
            main()

    Question: What are the benefits and drawbacks of each pattern? Is one much faster than the other?

    Read the article
