Search Results

Search found 28685 results on 1148 pages for 'query performance'.

  • How to improve Minecraft-esque voxel world performance?

    - by SomeXnaChump
    After playing Minecraft I marveled a bit at its large worlds, but at the same time I found them extremely slow to navigate, even with a quad core and a meaty graphics card. Now I assume Minecraft is fairly slow because: A) It's written in Java, and as most of the spatial partitioning and memory management activities happen there, it would naturally be slower than a native C++ version. B) It doesn't partition its world very well. I could be wrong on both assumptions; however, it got me thinking about the best way to manage large voxel worlds.

    As it is a true 3D world, where a block can exist in any part of the world, it is basically a big 3D array [x][y][z], where each block in the world has a type (i.e. BlockType.Empty = 0, BlockType.Dirt = 1, etc.). Now, I am assuming that to make this sort of world perform well you would need to: A) Use a tree of some variety (oct/kd/bsp) to split the cubes out; an octree or kd-tree seems like the better option, as you can partition on a per-cube level rather than a per-triangle level. B) Use some algorithm to work out which blocks can currently be seen, as blocks closer to the user can occlude the blocks behind them, making it pointless to render them. C) Keep the block objects themselves lightweight, so it is quick to add and remove them from the trees.

    I guess there is no right answer to this, but I would be interested to see people's opinions on the subject. How would you improve performance in a large voxel-based world?
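
    A minimal Java sketch of the chunked flat-array idea that usually goes with points A-C (all names here are hypothetical, not Minecraft's): the world is split into fixed-size chunks, each chunk stores block types in a flat byte array so lookups are O(1) and storage stays compact, and a face is only treated as visible when the neighbouring block is empty.

        // One fixed-size chunk of the voxel world; block types are bytes
        // (0 = BlockType.Empty), stored in a flat array for compactness.
        public final class Chunk {
            public static final int SIZE = 16; // 16x16x16 blocks per chunk

            private final byte[] blocks = new byte[SIZE * SIZE * SIZE];

            private static int index(int x, int y, int z) {
                return (x * SIZE + y) * SIZE + z;
            }

            public byte getBlock(int x, int y, int z) {
                return blocks[index(x, y, z)];
            }

            public void setBlock(int x, int y, int z, byte type) {
                blocks[index(x, y, z)] = type;
            }

            // Point B in miniature: a face needs rendering only if the block
            // next to it (in direction dx,dy,dz) is empty.
            public boolean isFaceVisible(int x, int y, int z, int dx, int dy, int dz) {
                int nx = x + dx, ny = y + dy, nz = z + dz;
                if (nx < 0 || ny < 0 || nz < 0 || nx >= SIZE || ny >= SIZE || nz >= SIZE) {
                    return true; // neighbour is in another chunk; treat as visible here
                }
                return blocks[index(nx, ny, nz)] == 0;
            }
        }

    Chunks, rather than individual blocks, then become the units stored in the octree and rebuilt when edited, which keeps per-block objects out of the tree entirely (point C).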

  • Minimum percentage of free physical memory that Linux requires for optimal performance

    - by csoto
    Recently, we have been getting questions about the percentage of free physical memory the OS requires for optimal performance, mainly applicable to physical compute nodes. Under normal conditions you may see that on nodes without any application running, the OS takes (for example) between 24 and 25 GB of memory. Linux reports free memory in a different way, and most of those 25 GB (in the example) are actually available for user processes, e.g.:

        Mem: 99191652k total, 23785732k used, 75405920k free, 173320k buffers

    The MOS Doc ID 233753.1 - "Analyzing Data Provided by '/proc/meminfo'" - explains it (section 4, "Final Remarks"), under "Free Memory and Used Memory": estimating resource usage, especially the memory consumption of processes, is far more complicated than it looks at first glance. The philosophy is that an unused resource is a wasted resource. The kernel therefore will use as much RAM as it can to cache information from your local and remote filesystems/disks. This builds up over time as reads and writes are done on the system, trying to keep the data stored in RAM as relevant as possible to the processes that have been running on your system. If there is free RAM available, more caching will be performed and thus more memory 'consumed'. However, this doesn't really count as resource usage, since this cached memory is available in case some other process needs it. The cache is reclaimed, not at the time of process exit (you might start up another process soon that needs the same data), but upon demand.

    That said, focusing more specifically on the percentage question: apart from the memory the OS takes, how much free memory, at minimum, must be available on every node so that they operate normally? The answer: as a rule of thumb, 80% memory utilization is a good threshold; anything bigger than that should be investigated and remedied.
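
    A small Python sketch of how you might apply that rule of thumb in practice, reading /proc/meminfo and counting buffers and page cache as free (the field names are standard; the 80% threshold is the rule of thumb above):

        # Compute "real" memory utilization, treating cache/buffers as free.
        def memory_utilization():
            fields = {}
            with open("/proc/meminfo") as f:
                for line in f:
                    key, value = line.split(":", 1)
                    fields[key] = int(value.split()[0])  # values are in kB
            total = fields["MemTotal"]
            free = fields["MemFree"] + fields.get("Buffers", 0) + fields.get("Cached", 0)
            return 100.0 * (total - free) / total

        if __name__ == "__main__":
            used = memory_utilization()
            print("Utilization: %.1f%%" % used)
            if used > 80:
                print("Above the 80% rule of thumb -- worth investigating.")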

  • How do I construct a Django reverse/url using query args?

    - by Andrew Dalke
    I have URLs like http://example.com/depict?smiles=CO&width=200&height=200 (and with several other optional arguments). My urls.py contains:

        urlpatterns = patterns('',
            (r'^$', 'cansmi.index'),
            (r'^cansmi$', 'cansmi.cansmi'),
            url(r'^depict$', cyclops.django.depict, name="cyclops-depict"),
        )

    I can go to that URL and get the 200x200 PNG that was constructed, so I know that part works. In my template, from the "cansmi.cansmi" response, I want to construct a URL for the named pattern "cyclops-depict" given some query parameters. I thought I could do

        {% url cyclops-depict smiles=input_smiles width=200 height=200 %}

    where "input_smiles" is an input to the template via a form submission. In this case it's the string "CO", and I thought it would create a URL like the one at top. This template fails with a TemplateSyntaxError:

        Caught an exception while rendering: Reverse for 'cyclops-depict' with arguments '()' and keyword arguments '{'smiles': u'CO', 'height': 200, 'width': 200}' not found.

    This is a rather common error message, both here on Stack Overflow and elsewhere. In every case I found, people were using it with parameters in the URL path regexp, which is not my case: my parameters go into the query string. That means I'm doing it wrong. How do I do it right? That is, I want to construct the full URL, including path and query parameters, using something in the template. For reference:

        % python manage.py shell
        Python 2.6.1 (r261:67515, Feb 11 2010, 00:51:29)
        [GCC 4.2.1 (Apple Inc. build 5646)] on darwin
        Type "help", "copyright", "credits" or "license" for more information.
        (InteractiveConsole)
        >>> from django.core.urlresolvers import reverse
        >>> reverse("cyclops-depict", kwargs=dict())
        '/depict'
        >>> reverse("cyclops-depict", kwargs=dict(smiles="CO"))
        Traceback (most recent call last):
          File "<console>", line 1, in <module>
          File "/Library/Python/2.6/site-packages/django/core/urlresolvers.py", line 356, in reverse
            *args, **kwargs)))
          File "/Library/Python/2.6/site-packages/django/core/urlresolvers.py", line 302, in reverse
            "arguments '%s' not found." % (lookup_view_s, args, kwargs))
        NoReverseMatch: Reverse for 'cyclops-depict' with arguments '()' and keyword arguments '{'smiles': 'CO'}' not found.
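
    reverse() only resolves the path portion of a URL; query parameters are not part of the URL pattern, so neither reverse() nor the {% url %} tag will build them for you. One common approach is to append the query string yourself (a sketch; the helper name is mine):

        from django.core.urlresolvers import reverse
        from django.utils.http import urlencode

        def depict_url(smiles, width, height):
            # reverse() gives '/depict'; the query string is appended separately.
            return "%s?%s" % (reverse("cyclops-depict"),
                              urlencode({"smiles": smiles,
                                         "width": width,
                                         "height": height}))

        # depict_url("CO", 200, 200) -> "/depict?smiles=CO&width=200&height=200"

    In a template this is typically wrapped in a small custom tag, or the prebuilt URL is passed in via the context.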

  • How to add a column via a query which counts the rows matching a specific criterion, in a table with a circular relationship, in MS Access 2007

    - by Xaqron
    I have a simple table "Employees" with these fields: ID, ParentID, Name. ParentID is nullable, since an employee may have no manager. This table has a one-to-many relationship with itself: ID --one--to--many--> ParentID. Now I want a query which returns these columns: Name, and the count of rows whose ParentID equals the current row's ID (i.e. the current row is the manager of those rows). Sample table:

        ID | ParentID | Name
        ---+----------+------
         1 |        0 | John
         2 |        1 | Bob
         3 |        1 | Alice
         4 |        3 | Jack

    This way I can find out how many other employees each employee manages. The result should be something like this:

        Name  | Count of Employees
        ------+-------------------
        John  | 2
        Bob   | 0
        Alice | 1
        Jack  | 0

    How can I achieve this in MS Access 2007? I have tried the built-in query builder without any success.
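
    A self-join with an outer join is one common shape for such a query (a sketch; I have not verified it against Access 2007 specifically, but Access supports aliased self-joins of this form):

        SELECT e.Name, COUNT(m.ID) AS [Count of Employees]
        FROM Employees AS e
        LEFT JOIN Employees AS m ON m.ParentID = e.ID
        GROUP BY e.ID, e.Name;

    COUNT(m.ID) counts only the matching subordinate rows, so employees with no reports get 0 instead of being dropped from the result.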

  • Is there a way to optimize this update query?

    - by SchlaWiener
    I have a master table called "parent" and a related table called "childs". Now I run a query against the master table to update some values with sums from the child table, like this:

        UPDATE master m
        SET quantity1 = (SELECT SUM(quantity1) FROM childs c WHERE c.master_id = m.id),
            quantity2 = (SELECT SUM(quantity2) FROM childs c WHERE c.master_id = m.id),
            count     = (SELECT COUNT(*)       FROM childs c WHERE c.master_id = m.id)
        WHERE master_id = 666;

    This works as expected, but it is not good style, because I basically run multiple SELECT queries over the same result. Is there a way to optimize that? (Making a query first and storing the values is not an option.) I tried this:

        UPDATE master m
        SET (quantity1, quantity2, count) = (
            SELECT SUM(quantity1), SUM(quantity2), COUNT(*)
            FROM childs c
            WHERE c.master_id = m.id
        )
        WHERE master_id = 666;

    but that doesn't work.
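
    Assuming MySQL (where the row-constructor SET syntax attempted above is not supported), one common rewrite aggregates the child table once in a derived table and joins it in; a sketch:

        UPDATE master m
        JOIN (SELECT master_id,
                     SUM(quantity1) AS q1,
                     SUM(quantity2) AS q2,
                     COUNT(*)       AS cnt
              FROM childs
              GROUP BY master_id) c ON c.master_id = m.id
        SET m.quantity1 = c.q1,
            m.quantity2 = c.q2,
            m.count     = c.cnt
        WHERE m.id = 666;

    This scans the child table once instead of three times per updated row.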

  • How can I build my SQL query from these tables?

    - by vee
    Hi All, I'm thinking of building a query from these 2 tables (on SQL Server 2008):

        Table1
        MemberId | MemberName | Percentage | Amount1
        ---------+------------+------------+--------
        00000001 | AAA        |        1.0 |     100
        00000002 | BBB        |        1.2 |     800
        00000003 | ZZZ        |        1.0 |     700

        Table2
        MemberId | MemberName | Percentage | Amount2
        ---------+------------+------------+--------
        00000002 | BBB        |        1.5 |     500
        00000002 | BBB        |        1.6 |     100
        00000002 | BBB        |        1.6 |     150

    The result I want is:

        MemberId | MemberName | Percentage | Amount | NettAmount
        ---------+------------+------------+--------+-----------
        00000001 | AAA        |        1.0 |    100 |        100
        00000002 | BBB        |        1.2 |    800 |         50  <-- 800 - (500+100+150)
        00000002 | BBB        |        1.5 |    500 |        500
        00000002 | BBB        |        1.6 |    650 |        650
        00000003 | ZZZ        |        1.0 |    700 |        700

    The 50 comes from the 800 in Table1 minus the sum of Amount2 in Table2 for MemberId 00000002. Please, someone help me build the query to reach this result. Thank you in advance.
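
    A sketch of one way to combine the two tables: net the Table1 amounts against the member's Table2 total, then append the Table2 rows grouped by percentage. (Note the 1.6 row in the sample output shows 650 where a plain SUM of the two 1.6 rows would give 250, so the grouping may need adjusting to match the exact expected rows.)

        SELECT t1.MemberId, t1.MemberName, t1.Percentage,
               t1.Amount1 AS Amount,
               t1.Amount1 - ISNULL(t2.Total, 0) AS NettAmount
        FROM Table1 t1
        LEFT JOIN (SELECT MemberId, SUM(Amount2) AS Total
                   FROM Table2
                   GROUP BY MemberId) t2 ON t2.MemberId = t1.MemberId
        UNION ALL
        SELECT MemberId, MemberName, Percentage,
               SUM(Amount2) AS Amount,
               SUM(Amount2) AS NettAmount
        FROM Table2
        GROUP BY MemberId, MemberName, Percentage
        ORDER BY MemberId, Percentage;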

  • How do I increase Relevance value in an advanced MySQL query?

    - by morgant
    I've got a MySQL query similar to the following:

        SELECT *,
               MATCH (`Description`) AGAINST ('+ipod +touch' IN BOOLEAN MODE) * 8 +
               MATCH (`Description`) AGAINST ('ipod touch' IN BOOLEAN MODE) AS Relevance
        FROM products
        WHERE (MATCH (`Description`) AGAINST ('+ipod +touch' IN BOOLEAN MODE)
            OR MATCH (`LongDescription`) AGAINST ('+ipod +touch' IN BOOLEAN MODE))
        HAVING Relevance > 1
        ORDER BY Relevance DESC

    Now, I've made the query more advanced by also searching by UPC:

        SELECT *,
               MATCH (`Description`) AGAINST ('+ipod +touch' IN BOOLEAN MODE) * 8 +
               MATCH (`Description`) AGAINST ('ipod touch' IN BOOLEAN MODE) +
               `UPC` = '123456789012' * 16 AS Relevance
        FROM products
        WHERE (MATCH (`Description`) AGAINST ('+ipod +touch' IN BOOLEAN MODE)
            OR MATCH (`LongDescription`) AGAINST ('+ipod +touch' IN BOOLEAN MODE))
           AND `UPC` = '123456789012'
        HAVING Relevance > 1
        ORDER BY Relevance DESC

    That returns results, but a successful match on the UPC does not increase the value of Relevance. Can I only do that kind of calculation with full-text searches like MATCH() AGAINST()? Clarification: Okay, so my real question is, why does the following not have Relevance = 16?

        SELECT `UPC`, `UPC` = '123456789012' * 16 AS Relevance
        FROM products
        WHERE `UPC` = '123456789012'
        HAVING Relevance > 1
        ORDER BY Relevance DESC
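
    The likely culprit is operator precedence: in MySQL, * binds tighter than =, so `UPC` = '123456789012' * 16 is evaluated as `UPC` = ('123456789012' * 16), a comparison that yields 0 or 1, never 16. Parenthesizing the comparison first gives the intended boost; a sketch:

        SELECT `UPC`,
               (`UPC` = '123456789012') * 16 AS Relevance
        FROM products
        WHERE `UPC` = '123456789012'
        HAVING Relevance > 1
        ORDER BY Relevance DESC;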

  • What's the best way to measure and track performance over various calls at runtime?

    - by bitcruncher
    Hello. I'm trying to optimize the performance of my code, but I'm not familiar with Xcode's debuggers, or debuggers in general. Is it possible to track the execution time and frequency of calls being made at runtime? Imagine a chain of events with some recursive calls over a fraction of a second. What's the best way to track where the CPU spends most of its time? Many thanks. Edit: Maybe this is better asked by saying: how do I use the Xcode debugging tools to do a stack trace?

  • Are there ways to improve NHibernate's performance regarding entity instantiation?

    - by denny_ch
    Hi folks, while profiling NHibernate with NHProf I noticed that a lot of time is spent on entity building, or at least is spent outside the query duration (the database round trip). The project I'm currently working on prefetches some static data (which goes into the 2nd level cache) at application start. There are about 3000 rows (and maybe 30 columns) in the result set, which is queried in 75 ms. The overall duration observed by NHProf is about 13 SECONDS! Is this typical behaviour? I know that NHibernate shouldn't be used for bulk operations, but I didn't think that entity instantiation would be so expensive. Are there ways to improve performance in such situations, or do I have to live with it? Thx, denny_ch

  • Low cost way to host a large table yet keep the performance scalable?

    - by Leo Liang
    I have a growing table storing time-series data: 500M entries now, and 200K new records every day. The total size is around 15 GB for now. My clients query the table mostly via a PHP script, and the size of the result set is around 10K records (not very large):

        SELECT * FROM T WHERE timestamp > X AND timestamp < Y AND additionFilters

    And I want this operation to be cheap. Currently my table is hosted in Postgres 7, on a single box with 16 GB of memory, and I would love to see some good suggestions for hosting this at low cost while also allowing me to scale up for performance if needed. The table serves: 1. Query: 90% 2. Insert: 9.9% 3. Update: 0.1% <-- very rare.
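
    One low-cost lever within Postgres itself is range partitioning on the timestamp, so a query only touches the partitions overlapping [X, Y). A sketch using inheritance-based partitioning (this needs a newer Postgres than the 7.x mentioned above; table and column names are illustrative):

        CREATE TABLE t_2010_06 (
            CHECK (ts >= DATE '2010-06-01' AND ts < DATE '2010-07-01')
        ) INHERITS (t);
        CREATE INDEX t_2010_06_ts_idx ON t_2010_06 (ts);

        -- With constraint exclusion enabled, the planner skips partitions
        -- whose CHECK constraint cannot overlap the queried range.
        SET constraint_exclusion = on;

    Old partitions can then be archived or dropped wholesale, which also keeps the indexes on the hot data small.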

  • Having all Views in the Shared folder - works but is throwing "caught exceptions". Performance concerns?

    - by Scott
    Hi everyone, I have a simple but heavily used app done in VS2010/MVC2. I didn't like having separate folders for each view/controller, so I have all the views in the Shared folder. It's working fine, but while debugging in VS I noticed that it's throwing IO "caught exceptions", since it seems to look in the [FolderName]/[ViewName] folder before going down to the Shared folder. Again, the app runs fine, but I'm concerned that all these caught exceptions will have a minor performance impact, since they do have a cost in the CLR. Is there any way I can configure the routing so that it will only look in the Shared folder? Thanks.

  • When using query syntax in C#, "Enumeration yielded no results" - how do I retrieve the output?

    - by Shantanu Gupta
    I have created this query to fetch some results from the database. Here is my table structure; DtMapGuestDepartment (Table 1) and DtDepartment (Table 2) are being used:

        var dept_list = from map in DtMapGuestDepartment.AsEnumerable()
                        where map.Field<Nullable<long>>("GUEST_ID") == DRowGuestPI.Field<Nullable<long>>("PK_GUEST_ID")
                        join dept in DtDepartment.AsEnumerable()
                            on map.Field<Nullable<long>>("DEPARTMENT_ID") equals dept.Field<Nullable<long>>("DEPARTMENT_ID")
                        select dept.Field<string>("DEPARTMENT_ID");

    I am performing this query on DataTables and expect it to return a DataTable. Here I also want to select distinct departments from Table 1, which will be my next question; please answer that too if possible.

  • How to test the performance of a user's PC in/for Flash?

    - by Jan P.
    Hey, I'm a developer on a nice space MMO using Flash. On new PCs performance is quite good, but some features shouldn't be enabled on older PCs, because the framerate drops through the floor if we do. Flash wasn't made for this, but hey, pushing boundaries is fun. An example is fullscreen mode. Of course every user can manually enable it, but "advertising" it to a user with an old PC would be a bad idea - yet for the Alienware crowd it would be dumb not to. So I want to find out how "capable" a user's PC is, to decide whether I should enable or disable some features for him. Any ideas? Thanks, Sujan

  • How can I write a MySQL query to check multiple rows?

    - by Matt
    I have a MySQL table containing data on product features:

        feature_id | feature_product_id | feature_finder_id | feature_text | feature_status_yn
        -----------+--------------------+-------------------+--------------+------------------
                 1 |                  1 |                 1 | Webcam       | y
                 2 |                  1 |                 1 | Speakers     | y
                 3 |                  1 |                 1 | Bluray       | n

    I want to write a MySQL query that allows me to search for all products that have a 'y' feature_status_yn value for a given set of features, and returns the feature_product_id. The aim is to use this as a search tool that filters results to product IDs matching the requested feature set. A query of

        SELECT feature_id FROM product_features WHERE feature_finder_id = '1' AND feature_status_yn = 'y'

    will return all of the features of a given product. But how can I select all products (feature_product_id) that have a 'y' value for several features when those features are on separate rows? Multiple queries might be one way to do it, but I'm wondering whether there's a more elegant solution based purely in SQL.
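
    The usual pattern for this kind of "all of these features" filter is grouping with a HAVING count equal to the number of requested features; a sketch using the column names above:

        SELECT feature_product_id
        FROM product_features
        WHERE feature_text IN ('Webcam', 'Speakers')
          AND feature_status_yn = 'y'
        GROUP BY feature_product_id
        HAVING COUNT(DISTINCT feature_text) = 2;  -- 2 = number of requested features

    Only products matching every feature in the IN list survive the HAVING clause.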

  • When to trash hashmap contents to avoid performance degradation?

    - by Jack
    Hello, I'm working in Java with a large (millions of entries) HashMap that is built with a capacity of 10,000,000 and a load factor of .75. It's used to cache some values, since cached values become useless with time (they are not accessed anymore), but I can't remove the useless ones along the way; instead, I would like to entirely empty the cache when its performance starts to degrade. How can I decide when it's good to do that? For example, with a capacity of 10 million and a load factor of .75, should I empty it when it reaches 7.5 million elements? I have tried various threshold values, but I would like to have an analytic one. I've already verified that emptying it when it's quite full is a boost for performance (the first 2-3 algorithm iterations after the wipe just fill it back, then it starts running faster than before the wipe). Thanks
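
    An alternative to periodic full wipes is an LRU structure that evicts stale entries as it goes; the JDK's LinkedHashMap supports this directly (a minimal sketch):

        import java.util.LinkedHashMap;
        import java.util.Map;

        // Evicts the least-recently-used entry once maxEntries is exceeded,
        // so the cache never needs a full wipe.
        public class LruCache<K, V> extends LinkedHashMap<K, V> {
            private final int maxEntries;

            public LruCache(int maxEntries) {
                super(16, 0.75f, true); // true = iterate in access order
                this.maxEntries = maxEntries;
            }

            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxEntries;
            }
        }

    With this, "not accessed anymore" entries age out automatically, which sidesteps the question of picking a wipe threshold at all.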

  • Is there a linear-time performance guarantee with using an Iterator?

    - by polygenelubricants
    If all that you're doing is a simple one-pass iteration (i.e. only hasNext() and next(), no remove()), are you guaranteed linear time performance and/or amortized constant cost per operation? Is this specified in the Iterator contract anywhere? Are there data structures/Java Collection which cannot be iterated in linear time? java.util.Scanner implements Iterator<String>. A Scanner is hardly a data structure (e.g. remove() makes absolutely no sense). Is this considered a design blunder? Is something like PrimeGenerator implements Iterator<Integer> considered bad design, or is this exactly what Iterator is for? (hasNext() always returns true, next() computes the next number on demand, remove() makes no sense). Similarly, would it have made sense for java.util.Random implements Iterator<Double>?
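
    For concreteness, a sketch of the generator-style Iterator the question describes (a hypothetical class, not a JDK type; hasNext() is always true and next() computes values on demand):

        import java.util.Iterator;

        public class PrimeGenerator implements Iterator<Integer> {
            private int current = 1;

            @Override
            public boolean hasNext() {
                return true; // an endless sequence
            }

            @Override
            public Integer next() {
                current++;
                while (!isPrime(current)) current++;
                return current;
            }

            @Override
            public void remove() {
                throw new UnsupportedOperationException(); // meaningless here
            }

            private static boolean isPrime(int n) {
                for (int i = 2; i * i <= n; i++)
                    if (n % i == 0) return false;
                return n >= 2;
            }
        }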

  • What are the performance characteristics of SignalR at scale?

    - by Joel Martinez
    I'm interested in the performance characteristics of SignalR at scale ... particularly, how it behaves at the fringes of capability. When a server is at capacity, what happens? Does it drop messages? Do some clients not get notified? Are messages queued until all are delivered? And if so, will the queue eventually overflow and crash the server? I ask because conducting such a test myself would be impractical, and I'm hoping someone could point me to documentation speaking to this ... or perhaps someone could comment that has seen how SignalR behaves at scale. Thanks! note: I'm familiar with this other stackoverflow question on the stability and scalability of SignalR. But I believe my question is asking a slightly different question in that I'm not concerned with the theoretical scaling limits, I want to know how it behaves when it reaches the limits ... so I know what to be on the lookout for.

  • Is there a library / tool to query MySQL data files (MyISAM / InnoDB) without the server? (the SQLite way)

    - by MGW
    Oftentimes I want to query my MySQL data directly, without a server running or without having access to the server (but having read/write rights to the files). Is there a tool, or maybe even a library, for querying MySQL data files the way it is possible with SQLite? I'm specifically looking for InnoDB and MyISAM support. Performance is not a factor. I don't have any knowledge of MySQL internals, but I presume it should be possible to do and not too hard to get the specific code out. Thank you for any suggestions!

  • Why do debug symbols so adversely affect the performance of threaded applications on Linux?

    - by fluffels
    Hi. I'm writing a ray tracer. Recently, I added threading to the program to exploit the additional cores on my i5 quad core. In a weird turn of events, the debug version of the application is now running slower, but the optimized build is running faster than before I added threading. I'm passing the "-g -pg" flags to gcc for the debug build and the "-O3" flag for the optimized build. Host system: Ubuntu Linux 10.04 AMD64. I know that debug symbols add significant overhead to the program, but the relative performance has always been maintained, i.e. a faster algorithm will always run faster in both debug and optimized builds. Any idea why I'm seeing this behavior?

  • Performance Impact of Generating 100's of Dynamic Methods in Ruby?

    - by viatropos
    What are the performance issues associated with generating hundreds of dynamic methods in Ruby? I've been interested in using the Ruby Preferences gem and noticed that it generates a bunch of helper methods for each preference you set. For instance:

        class User < ActiveRecord::Base
          preference :hot_salsa
        end

    ...generates something like:

        user.prefers_hot_salsa?  # => false
        user.prefers_hot_salsa   # => false

    If there are hundreds of preferences like this, how does it impact the application? I assume it's not really a big deal, but I'm just wondering, theoretically.

  • If I take a larger datatype than needed, will it affect performance in SQL Server?

    - by Shantanu Gupta
    If I take a larger datatype where I know a smaller one would be sufficient for the possible values I will insert into a table, will it affect performance in SQL Server, in terms of speed or in any other way? E.g., IsActive can be (0,1,2,3), never more than 3 in any case. I know I should use tinyint, but due to some reasons (consider it a compulsion) I am taking every numeric field as bigint and every character field as nvarchar(max). Please give statistics if possible, to help me try to overcome that compulsion. I need some solid analysis that can really make someone rethink before picking a datatype.
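
    A quick way to see the raw storage difference yourself (a minimal illustration; beyond the raw bytes, wider rows also mean fewer rows per 8 KB page, and therefore more I/O per scan):

        SELECT DATALENGTH(CAST(1 AS tinyint)) AS tinyint_bytes,             -- 1
               DATALENGTH(CAST(1 AS bigint))  AS bigint_bytes,              -- 8
               DATALENGTH(CAST(N'abc' AS nvarchar(max))) AS nvarchar_bytes; -- 6 (2 per character)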

  • Does INNER JOIN performance depend on the order of the tables?

    - by Kartic
    A question suddenly came to my mind while I was tuning a stored procedure; let me ask it. I have two tables, table1 and table2; table1 contains huge data and table2 contains less data. Is there, performance-wise, any difference between these two queries (I am only changing the order of the tables)?

        Query 1:
        SELECT t1.col1, t2.col2
        FROM table1 t1
        INNER JOIN table2 t2 ON t1.col1 = t2.col2

        Query 2:
        SELECT t1.col1, t2.col2
        FROM table2 t2
        INNER JOIN table1 t1 ON t1.col1 = t2.col2

    We are using Microsoft SQL Server 2005.

  • Improving the performance of an NHibernate Data Access Layer

    - by Amitabh
    I am working on improving the performance of the data access layer of an existing ASP.NET web application. The scenario: it's a web-based application in ASP.NET; the data access layer is built using NHibernate 1.2 and exposed as a WCF service; the entity class is marked with DataContract. Lazy loading is not used, and because of the eager fetching of the relations a huge number of database objects is loaded into memory. The number of hits to the database is also high. For example, I profiled the application using NHProf, and there were about 50+ SQL calls to load one of the entity objects by its primary key. I also cannot change the code much, as it's an existing live application with no NUnit test cases at all. Please can I get some suggestions here?
