Search Results

Search found 13151 results on 527 pages for 'performance counters'.

  • Objective-C: fastest way to show a sequence of images in UIImageView

    - by Almas Adilbek
    I have hundreds of images, which are the frames of one animation (24 frames per second). Each image is 1024x690. My problem is that I need smooth animation while iterating through the frames in a UIImageView. I know I can use UIImageView's animationImages, but it crashes because of memory problems. I could also use imageView.image = [UIImage imageNamed:@""], which caches each image so that repeat runs of the animation are smooth, but caching that many images crashes the app. Now I use imageView.image = [UIImage imageWithContentsOfFile:@""], which doesn't crash the app but doesn't animate smoothly either. Is there a better way to animate frame images? Maybe I need to do some preparation to somehow achieve a better result. I need your advice. Thank you!

  • Can I optimize this at all?

    - by Moshe
    I'm working on an iOS app and I'm using the following code for one of my tables to return the number of rows in a particular section:

        return [[kSettings arrayForKey:@"views"] count];

    Is there any other way to write that line of code so that it is more memory efficient? EDIT: kSettings is [NSUserDefaults standardUserDefaults]. Is there any way to rewrite the line so that whatever memory it occupies is released sooner than it is now?

  • What to keep in mind when building an AJAX-based webapp

    - by Industrial
    Hi everyone, We're in the first steps of what will be an AJAX-based webapp where information and generated HTML will be sent back and forth with the help of JSON/POST techniques. We're able to get the data out quickly without putting too much load on the database, with the help of a cache layer that features memcached as well as a disk-based cache. Besides that, what's essential to keep in mind when designing AJAX-heavy webapps? Thanks a lot,

  • General question: Filesystem or database?

    - by poeschlorn
    Hey guys, I want to create a small document management system. There are several users who store their files; each file which is uploaded carries the info of which user uploaded it, plus the document content itself. A view displays all files of ONE specific user, ordered by date. What would be better:

    1) giving the documents a name or metadata (XML) which contains the date and user (and iterating through them to get the metadata), or
    2) giving the files a random/unique name and storing the metadata in a DB? Something like this: date | user | filename

    What would you say, and why? The programming language used is Java and the DB is MySQL.

  • Fast serialization/deserialization of structs

    - by user256890
    I have a huge amount of geographic data represented in a simple object structure consisting only of structs. All of my fields are of value type.

        public struct Child
        {
            readonly float X;
            readonly float Y;
            readonly int myField;
        }

        public struct Parent
        {
            readonly int id;
            readonly int field1;
            readonly int field2;
            readonly Child[] children;
        }

    The data is chunked up nicely into small portions of Parent[]-s. Each array contains a few thousand Parent instances. I have way too much data to keep it all in memory, so I need to swap these chunks to disk back and forth (one file would be approx. 200-300 KB). What would be the most efficient way of serializing/deserializing a Parent[] to a byte[] for dumping to disk and reading back? Concerning speed, I am particularly interested in fast deserialization; write speed is not that critical. Would a simple BinarySerializer be good enough, or should I hack around with StructLayout (see the accepted answer)? I am not sure if that would work with the array field Parent.children. UPDATE: Response to comments - yes, the objects are immutable (code updated), and indeed the children field is not a value type. 300 KB does not sound like much, but I have zillions of files like that, so speed does matter.
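
    A minimal sketch of the manual route, for comparison: because every field is a primitive (plus the one Child[] per Parent), you can bypass reflection-based serializers and stream the fields directly with BinaryWriter/BinaryReader. The field layout below mirrors the question, but the added constructors and public visibility are assumptions made so the sketch compiles on its own.

        // Hedged sketch: manual binary (de)serialization of a Parent[] chunk.
        // Constructors are added so the readonly fields can be populated on read.
        using System.IO;

        public struct Child
        {
            public readonly float X;
            public readonly float Y;
            public readonly int MyField;
            public Child(float x, float y, int myField) { X = x; Y = y; MyField = myField; }
        }

        public struct Parent
        {
            public readonly int Id;
            public readonly int Field1;
            public readonly int Field2;
            public readonly Child[] Children;
            public Parent(int id, int f1, int f2, Child[] children)
            { Id = id; Field1 = f1; Field2 = f2; Children = children; }
        }

        public static class ChunkIO
        {
            public static void Write(BinaryWriter w, Parent[] chunk)
            {
                w.Write(chunk.Length);
                foreach (Parent p in chunk)
                {
                    w.Write(p.Id); w.Write(p.Field1); w.Write(p.Field2);
                    w.Write(p.Children.Length);
                    foreach (Child c in p.Children)
                    {
                        w.Write(c.X); w.Write(c.Y); w.Write(c.MyField);
                    }
                }
            }

            public static Parent[] Read(BinaryReader r)
            {
                Parent[] chunk = new Parent[r.ReadInt32()];
                for (int i = 0; i < chunk.Length; i++)
                {
                    int id = r.ReadInt32(), f1 = r.ReadInt32(), f2 = r.ReadInt32();
                    Child[] children = new Child[r.ReadInt32()];
                    for (int j = 0; j < children.Length; j++)
                        children[j] = new Child(r.ReadSingle(), r.ReadSingle(), r.ReadInt32());
                    chunk[i] = new Parent(id, f1, f2, children);
                }
                return chunk;
            }
        }

    Wrapping the BinaryReader around a MemoryStream that holds the whole 200-300 KB file keeps deserialization down to one disk read per chunk.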

  • Fastest way to do a weighted tag search in SQL Server

    - by Hasan Khan
    My table is as follows:

        ObjectID bigint
        Tag nvarchar(50)
        Weight float
        Type tinyint

    I want to search for all objects that have the tags 'big' or 'large', and I want the ObjectIDs in order of the sum of weights (so objects having both tags will be on top):

        select objectid, row_number() over (order by sum(weight) desc) as rowid
        from tags
        where tag in ('big', 'large') and type = 0
        group by objectid

    The reason for row_number() is that I want paging over the results. The query in its current form is very slow; it takes a minute to execute over 16 million tags. What should I do to make it faster? I have a non-clustered index on (objectid, tag, type). Any suggestions?

  • SQL Profiler and Tuning Advisor for Reporting Services - what events should be selected?

    - by chris
    I've used SQL Profiler to generate a trace file, and the Tuning Advisor to take that trace file and provide some recommendations on db updates. However, when running against a Reporting Services server, the Profiler doesn't seem to capture any of the queries. I'm logging the defaults (SQL:BatchCompleted and SQL:BatchStarting, RPC:Completed, and Sessions - Existing Connections). What events should I be capturing in SQL Profiler in order to run the Tuning Advisor?

  • Fastest way to generate 1,000,000+ random numbers in Python

    - by Sandro
    I am currently writing an app in Python that needs to generate a large amount of random numbers, FAST. Currently I have a scheme going that uses numpy to generate all of the numbers in a giant batch (about ~500,000 at a time). While this seems to be faster than Python's implementation, I still need it to go faster. Any ideas? I'm open to writing it in C and embedding it in the program, or doing whatever it takes. Constraints on the random numbers:

    1) A set of 7 numbers that can all have different bounds, e.g. [0-X1, 0-X2, 0-X3, 0-X4, 0-X5, 0-X6, 0-X7]. Currently I am generating a list of 7 numbers with random values from [0-1) then multiplying by [X1..X7].
    2) A set of 13 numbers that all add up to 1. Currently I am just generating 13 numbers then dividing by their sum.

    Any ideas? Would precalculating these numbers and storing them in a file make this faster? Thanks!

  • Profilers for ASP.NET Web Applications?

    - by Earlz
    I was recently wanting to do some profiling on an ASP.NET project and was surprised to see that Visual Studio (at least) seems to lack a profiler. So my question is: what profiler do you use for ASP.NET? Are there any decent ones out there that are free? I've seen a few general .NET profilers, but have yet to see one that can be used with ASP.NET.

  • SQL Database Schema Design for a Large 3-Billion-Relationship Database

    - by K-Bell
    Get your geek on. Can you solve this? I am designing a products database for SQL Server 2008 R2 Ed. (not Enterprise Ed.) that will be used to store custom product configurations for over 30,000 distinct products. The database will have up to 500 users at a time. Here is the design problem:

    Each Product has a collection of Parts (up to 50 parts per product). So if I have 30,000 Products and each of them can have up to 50 Parts, that's 1.5 million distinct Product-to-Part relationships, or as an equation: 30,000 (Products) x 50 (Parts) = 1.5 million Product-to-Part records.

    And if each Part can have up to 2,000 finish options (a finish is a paint color; only one finish will be selected by a user at run-time), where the 2,000 finish options I need to store are the allowed options for a specific part on a specific product, then 1.5 million distinct Product-to-Part relationships/records, each with up to 2,000 finishes, gives 3 billion allowable Product-to-Part-to-Finish relationships/records, or as an equation: 1.5 million (Parts) x 2,000 (Finishes) = 3 billion Product-to-Part-to-Finish records.

    How can I design this database so that I can execute fast and efficient queries for a specific product and return its list of Parts and all the allowable Finishes for each part, without 3 billion Product-to-Part-to-Finish records? Read time is more important than write time. Please post your thoughts/suggestions if you have experience with large databases. Thanks!

  • Is it better to use GL_FIXED or GL_FLOAT on Android?

    - by Timmmm
    I would have assumed that GL_FIXED was faster, but the iPhone docs actually say to use GL_FLOAT because GL_FIXED has to be converted to GL_FLOAT. Is it the same on Android? I suppose it varies by phone, but what about recent popular ones (Nexus One, Droid/Milestone, etc.)? Bonus points: this appears to be completely undocumented (e.g. search Google for GL_FIXED!), but where is the binary 'point' in GL_FIXED, i.e. how much is (GL_FIXED)1 worth?

  • Re-sort on a std::vector vs std::insert

    - by Abruzzo Forte e Gentile
    I have a sorted std::vector of relatively small size (from 5 to 20 elements). I used std::vector since the data is contiguous, so I get speed from the cache. At a specific point I need to remove an element from this vector, and I have a doubt: which is the faster way to remove the value, of the 2 options below?

    1) Setting that element to 0 and calling sort to reorder: this has the complexity of a sort, but the elements stay on the same cache lines.
    2) Calling erase, which will copy (or memcpy, who knows?) all elements after it back by one place (I need to investigate what erase does behind the scenes).

    Do you know which one is faster? I think the same question applies to inserting a new element without hitting the max capacity of the vector. Regards, AFG

  • Apache alias and .htaccess: trying to understand configuration

    - by sushil bharwani
    On our local dev environment we had just one server, and to add far-future Expires and Cache-Control headers to static images we kept a .htaccess file in the root of the application; things worked fine. But on prod we have multiple Apache servers with aliases to a code base on a different server. In this case I am not sure where to keep the .htaccess file. Should I keep it in the code base or on the individual Apache servers? And how can I write the same stuff that I have written in the .htaccess file in httpd.conf?

  • TreeView Slow in IE?!?!

    - by Mike
    I have a TreeView with around 200 records that needs to be fully expanded at all times (so no loading on demand). It is inside an UpdatePanel with UpdateMode set to Conditional. There are other UpdatePanels on the page as well that are set to Conditional. Depending on user actions, the tree may need to be rebuilt by calling DataBind and updating the UpdatePanel. Everything works fine in Firefox; the longest postback is about 2 seconds. With IE I sometimes have to wait up to 30 seconds, and the action may have nothing to do with the tree: just changing a dropdown in its own UpdatePanel takes forever. I have considered that the size of the ViewState and the raw HTML generated may be causing the delay, but wouldn't that affect both browsers? Anyone have any ideas what is making it so slow in IE??? Thanks!

  • Suggestions on database structure for relational data

    - by miccet
    Hi there. I've been wrestling with this problem for quite a while now, and the automatic mails with 'Slow Query' warnings are still popping in. Basically, I have Blogs with a corresponding table, as well as a table that keeps track of how many times each Blog has been viewed. This last table has a huge number of records, since this page gets relatively high traffic and it logs every hit as an individual row. I have tried indexes on the fields that are included in the WHERE clause, but it doesn't seem to help. I have also tried to clean the table each week by removing old (> 1 week) records. SO, I'm asking you guys, how would you solve this? The query that I know is causing the slowness is generated by Rails and looks like this:

        SELECT count(*) AS count_all
        FROM blog_views
        WHERE (created_at >= '2010-01-01 00:00:01' AND blog_id = 1);

    The tables have the following structures:

        CREATE TABLE IF NOT EXISTS `blogs` (
          `id` int(11) NOT NULL auto_increment,
          `name` varchar(255) default NULL,
          `perma_name` varchar(255) default NULL,
          `author_id` int(11) default NULL,
          `created_at` datetime default NULL,
          `updated_at` datetime default NULL,
          `blog_picture_id` int(11) default NULL,
          `blog_picture2_id` int(11) default NULL,
          `page_id` int(11) default NULL,
          `blog_picture3_id` int(11) default NULL,
          `active` tinyint(1) default '1',
          PRIMARY KEY (`id`),
          KEY `index_blogs_on_author_id` (`author_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1;

    And:

        CREATE TABLE IF NOT EXISTS `blog_views` (
          `id` int(11) NOT NULL auto_increment,
          `blog_id` int(11) default NULL,
          `ip` varchar(255) default NULL,
          `created_at` datetime default NULL,
          `updated_at` datetime default NULL,
          PRIMARY KEY (`id`),
          KEY `index_blog_views_on_blog_id` (`blog_id`),
          KEY `created_at` (`created_at`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1;

  • Does having to insert a record, then update the same record, warrant a 1:1 relationship design?

    - by dianovich
    Let's say an Order has many Line items, and we're storing the total cost of an order (based on the sum of prices on its order lines) in the orders table.

        orders
        ----------
        id
        ref
        total_cost

        lines
        ----------
        id
        order_id
        price

    In a simple application, the order and lines are created during the same step of the checkout process. So this means:

        INSERT INTO orders ....
        -- Get ID of inserted order record
        INSERT INTO lines VALUES (null, order_id, ...), ...

    where we get the order ID after creating the order record. The problem I'm having is trying to figure out the best way to store the total cost of an order. I don't want to have to:

    1. create an order
    2. create the lines on the order
    3. calculate the cost of the order based on its lines
    4. then update the record created in step 1 in the orders table

    This would mean a nullable total_cost field on orders, for starters... My solution thus far is to have an order_totals table with a 1:1 relationship to the orders table, but I think it's redundant. Ideally, since everything required to calculate total costs (the lines on an order) is in the database, I would work out the value every time I need it, but this is very expensive. What are your thoughts?
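
    A hedged sketch of the simplest alternative: since the line prices are already known at checkout, the application can sum them before touching the database, so the order row is inserted with its final total_cost in one pass and no nullable column or 1:1 side table is needed. The table and column names follow the question; the ADO.NET wiring and method shape are assumptions.

        // Hedged sketch: compute total_cost from the known line prices first,
        // then insert the order and its lines inside one transaction.
        using System;
        using System.Collections.Generic;
        using System.Data.SqlClient;
        using System.Linq;

        class OrderWriter
        {
            public static void CreateOrder(string connString, string orderRef, List<decimal> linePrices)
            {
                decimal totalCost = linePrices.Sum(); // total known before any INSERT

                using (var conn = new SqlConnection(connString))
                {
                    conn.Open();
                    using (var tx = conn.BeginTransaction())
                    {
                        var insertOrder = new SqlCommand(
                            "INSERT INTO orders (ref, total_cost) VALUES (@ref, @total); " +
                            "SELECT CAST(SCOPE_IDENTITY() AS int);", conn, tx);
                        insertOrder.Parameters.AddWithValue("@ref", orderRef);
                        insertOrder.Parameters.AddWithValue("@total", totalCost);
                        int orderId = (int)insertOrder.ExecuteScalar();

                        foreach (decimal price in linePrices)
                        {
                            var insertLine = new SqlCommand(
                                "INSERT INTO lines (order_id, price) VALUES (@orderId, @price);",
                                conn, tx);
                            insertLine.Parameters.AddWithValue("@orderId", orderId);
                            insertLine.Parameters.AddWithValue("@price", price);
                            insertLine.ExecuteNonQuery();
                        }
                        tx.Commit(); // order and lines land together, total already correct
                    }
                }
            }
        }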

  • Rails belongs_to SQL statement using NULL id

    - by Team Pannous
    When paginating through our Phrase table, it takes very long to return the results. In the SQL logs we see many SQL requests which don't make sense to us:

        Phrase Load (7.4ms)  SELECT "phrases".* FROM "phrases" WHERE "phrases"."id" IS NULL LIMIT 1
        User Load (0.4ms)  SELECT "users".* FROM "users" WHERE "users"."id" IS NULL LIMIT 1

    These add up significantly. Is there a way to prevent querying against null ids? This is the underlying model:

        class Phrase < ActiveRecord::Base
          belongs_to :user
          belongs_to :response, :class_name => "Phrase", :foreign_key => "next_id"
        end

  • Session Timeout and page response time

    - by Johnny5
    Hi, I'm load testing an ASP.NET app. The load test simulates 500 users doing searches on the site and browsing the results. I'm observing that the more I reduce the session timeout limit (in web.config), the better the page response time. For example, with the timeout at 10 minutes, I get an average response time of 8.35 seconds. With the timeout at 3 minutes, the average response time for the same page is 3.98 seconds. The session is stored "InProc". I suppose the memory used by the no-longer-used but still-active sessions may be the cause. But even though more memory is used when the timeout is at 10 minutes, there is still plenty of memory available (about 2.7 GB). Any ideas?
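
    One way to test that theory, sketched below: ASP.NET publishes performance counters for InProc sessions, so you can log the active session count next to worker-process memory while the load test runs and see whether response time tracks either number. The category and counter names are the standard ASP.NET/Process ones, but the "__Total__" and "w3wp" instance names are assumptions; Performance Monitor lists the exact instances on your box.

        // Hedged sketch: sample session and memory counters during the load test.
        using System;
        using System.Diagnostics;
        using System.Threading;

        class SessionMonitor
        {
            static void Main()
            {
                var sessions = new PerformanceCounter(
                    "ASP.NET Applications", "Sessions Active", "__Total__", true);
                var memory = new PerformanceCounter(
                    "Process", "Private Bytes", "w3wp", true);

                while (true)
                {
                    Console.WriteLine("{0:T}  active sessions: {1,6}  private bytes: {2,14:N0}",
                        DateTime.Now, sessions.NextValue(), memory.NextValue());
                    Thread.Sleep(5000); // sample every 5 seconds
                }
            }
        }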

  • Does having a longer string in a SQL LIKE expression hinder or help query execution speed?

    - by Allain Lalonde
    I have a db query that'll cause a full table scan using a LIKE clause, and I came upon a question I was curious about... Which of the following should run faster in MySQL, or would they both run at the same speed? Benchmarking might answer it in my case, but I'd like to know the why of the answer. The column being filtered contains a couple thousand characters, if that's important.

        SELECT * FROM users WHERE data LIKE '%=12345%'

    or

        SELECT * FROM users WHERE data LIKE '%proileId=12345%'

    I can come up with reasons why each of these might outperform the other, but I'm curious to know the logic.

  • Very simple Python functions spend a long time in the function and not in subfunctions

    - by John Salvatier
    I have spent many hours trying to figure out what is going on here. The function grad_logp in the code below is called many times in my program, and cProfile and RunSnakeRun (to visualize the results) reveal that grad_logp spends about .00004s 'locally' on every call, not in any functions it calls, and the function n spends about .00006s locally on every call. Together these two times make up about 30% of the program time that I care about. It doesn't seem like this is function overhead, as other Python functions spend far less time 'locally', and merging grad_logp and n does not make my program faster, but the operations that these two functions do seem rather trivial. Does anyone have any suggestions on what might be happening? Have I done something obviously inefficient? Am I misunderstanding how cProfile works?

        def grad_logp(self, variable, calculation_set):
            p = params(self.p, self.parents)
            return self.n(variable, self.p)

        def n(self, variable, p):
            gradient = self.gg(variable, p)
            return np.reshape(gradient, np.shape(variable.value))

        def gg(self, variable, p):
            if variable is self:
                gradient = self._grad_logps['x'](x=self.value, **p)
            else:
                gradient = __builtin__.sum([self._pgradient(variable, parameter, value, p)
                                            for parameter, value in self.parents.iteritems()])
            return gradient

  • Is there a better way to count the messages in a Message Queue (MSMQ)?

    - by Damovisa
    I'm currently doing it like this:

        MessageQueue queue = new MessageQueue(@".\Private$\myqueue");
        MessageEnumerator messageEnumerator = queue.GetMessageEnumerator2();
        int i = 0;
        while (messageEnumerator.MoveNext())
        {
            i++;
        }
        return i;

    But for obvious reasons it just feels wrong - I shouldn't have to iterate through every message just to get a count, should I? Is there a better way?
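
    For what it's worth, one commonly suggested alternative is to read the length from MSMQ's own performance counters instead of enumerating; a hedged sketch follows. The "MSMQ Queue" category and "Messages in Queue" counter are the standard MSMQ counters, but the exact instance-name format for a private queue is an assumption worth verifying in Performance Monitor.

        // Hedged sketch: read the queue length from the MSMQ performance counter
        // instead of walking every message.
        using System;
        using System.Diagnostics;

        class QueueLength
        {
            static long CountViaPerfCounter(string queueInstance)
            {
                using (var counter = new PerformanceCounter(
                    "MSMQ Queue", "Messages in Queue", queueInstance, true))
                {
                    return (long)counter.NextValue();
                }
            }

            static void Main()
            {
                // Assumed instance format: machine\private$\queue (check perfmon).
                string instance = Environment.MachineName.ToLower() + "\\private$\\myqueue";
                Console.WriteLine("Messages in queue: {0}", CountViaPerfCounter(instance));
            }
        }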

  • What is the cost of object creation?

    - by Tony
    Hi, If I have to choose between a static method and creating an instance to call an instance method, I will always choose the static method. But what is the detailed overhead of creating an instance? For example, I saw a DAL which could have been done with static classes, but they chose to make it instance-based, so now in the BLL, on every single call, they call something like:

        new Customer().GetData();

    How bad can this be? Thanks
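
    A rough way to put a number on it, purely as an illustration: the micro-benchmark below times a static call against a new-instance-per-call pattern with Stopwatch. The Customer/GetData shape mirrors the question, but the class body is made up, and the result varies with object size and GC pressure, so treat it as a sketch rather than a verdict.

        // Hedged micro-benchmark sketch: static call vs. allocating per call.
        using System;
        using System.Diagnostics;

        class Customer
        {
            public int GetData() { return 42; }               // instance version
            public static int GetDataStatic() { return 42; }  // static version
        }

        class Benchmark
        {
            static void Main()
            {
                const int iterations = 10000000;
                int sink = 0;

                Stopwatch sw = Stopwatch.StartNew();
                for (int i = 0; i < iterations; i++)
                    sink += Customer.GetDataStatic();
                sw.Stop();
                Console.WriteLine("static:       {0} ms", sw.ElapsedMilliseconds);

                sw = Stopwatch.StartNew();
                for (int i = 0; i < iterations; i++)
                    sink += new Customer().GetData();         // allocates every call
                sw.Stop();
                Console.WriteLine("new instance: {0} ms (sink={1})", sw.ElapsedMilliseconds, sink);
            }
        }

    A stateless gen-0 allocation is cheap (little more than a pointer bump plus eventual collection), so for most DAL/BLL code the difference is a design question more than a performance one.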

  • Caching result of SELECT statement for reuse in multiple queries

    - by Andrew
    I have a reasonably complex query to extract the Id field of the results I am interested in, based on parameters entered by the user. After extracting the relevant Ids I use the resulting set of Ids several times, in separate queries, to extract the actual output record sets I want (by joining to other tables, using aggregate functions, etc.). I would like to avoid running the initial query separately for every set of results I want to return. I imagine my situation is a common pattern, so I am interested in what the best approach is. The database is MS SQL Server and I am using .NET 3.5.
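
    One pattern that may fit, sketched below: materialize the Ids once into a #temp table on the connection, then let each follow-up query join against it, so the expensive filter runs only once per request. Since the original queries aren't shown, the table and column names here are hypothetical.

        // Hedged sketch: run the expensive Id selection once into a #temp table,
        // then reuse it for each output query on the same open connection.
        using System;
        using System.Data.SqlClient;

        class IdSetReuse
        {
            static void Run(string connString)
            {
                using (var conn = new SqlConnection(connString))
                {
                    conn.Open();

                    // 1. Materialize the Ids once; #ids lives as long as the connection.
                    new SqlCommand(
                        "SELECT r.Id INTO #ids FROM Records r WHERE /* complex criteria */ 1 = 1;",
                        conn).ExecuteNonQuery();

                    // 2. Each result set joins against #ids instead of re-running the filter.
                    var summary = new SqlCommand(
                        "SELECT d.Category, COUNT(*) AS Total " +
                        "FROM Details d JOIN #ids i ON d.RecordId = i.Id " +
                        "GROUP BY d.Category;", conn);
                    using (SqlDataReader reader = summary.ExecuteReader())
                    {
                        while (reader.Read())
                            Console.WriteLine("{0}: {1}", reader.GetString(0), reader.GetInt32(1));
                    }
                }
            }
        }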
