Search Results

Search found 3758 results on 151 pages for 'efficient'.

Page 23 of 151

  • Simple but efficient way to store a series of small changes to an image?

    - by finnw
    I have a series of images. Each one is typically (but not always) similar to the previous one, with 3 or 4 small rectangular regions updated. I need to record these changes using a minimum of disk space. The source images are not compressed, but I would like the deltas to be compressed. I need to be able to recreate the images exactly as input (so a lossy video codec is not appropriate). I am thinking of something along the lines of:

    1. Composite the new image with a negative of the old image.
    2. Save the composited image in any common format that can compress using RLE (probably PNG).
    3. Recreate the second image by compositing the previous image with the delta.

    Although the images have an alpha channel, I can ignore it for the purposes of this function. Is there an easy-to-implement algorithm or free Java library with this capability?
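
    For illustration only, here is that delta scheme sketched in Python with Pillow rather than Java (the same per-pixel arithmetic can be done on a BufferedImage raster); subtract_modulo/add_modulo make the round trip exact, and PNG keeps the stored delta lossless:

        from PIL import Image, ImageChops

        def make_delta(prev_path, curr_path, delta_path):
            """Store only the per-pixel difference; unchanged regions become
            long runs of zeros, which PNG compresses very well."""
            prev = Image.open(prev_path).convert("RGB")
            curr = Image.open(curr_path).convert("RGB")
            delta = ImageChops.subtract_modulo(curr, prev)  # reversible mod-256 difference
            delta.save(delta_path, "PNG")                   # lossless

        def apply_delta(prev_path, delta_path):
            prev = Image.open(prev_path).convert("RGB")
            delta = Image.open(delta_path).convert("RGB")
            return ImageChops.add_modulo(prev, delta)       # recreates the image exactly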

    Read the article

  • Is there a more efficient AS3 way to compare 2 arrays for adds, removes & updates?

    - by WillyCornbread
    Hi all - I'm wondering if there is a better way to approach this than my current solution... I have a list of items, and then I retrieve another list of items. I need to compare the two lists and come up with a list of items that exist in both (for update), a list that do not exist in the new list (for removal), and a list of items that do not exist in the old list (for adding). Here is what I'm doing now - basically creating a lookup object for testing if an item exists. Thanks for any tips.

        for each (itm in _oldItems)
        {
            _oldLookup[itm.itemNumber] = itm;
        }

        // Loop through items and check if they already exist in the 'old' list
        for each (itm in _items)
        {
            // If an item exists in the old list - push it for update
            if (_oldLookup[itm.itemNumber])
            {
                _itemsToUpdate.push(itm);
            }
            else // otherwise push it into the items to add
            {
                _itemsToAdd.push(itm);
            }

            // remove it from the lookup list - this will leave only
            // items for removal remaining in the lookup
            delete _oldLookup[itm.itemNumber];
        }

        // The items remaining in the lookup object have neither been added nor updated -
        // so they must be for removal - add to list for removal
        for each (itm in _oldLookup)
        {
            _itemsToRemove.push(itm);
        }
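
    For reference, the same lookup-table pass written compactly in Python (the item and key names are assumptions, not taken from the original code); the approach is already a single pass over each list:

        def diff_by_key(old_items, new_items, key=lambda item: item["itemNumber"]):
            """Split the lists into update/add/remove buckets in one pass,
            mirroring the lookup-object approach above."""
            old_lookup = {key(item): item for item in old_items}
            to_update, to_add = [], []
            for item in new_items:
                if key(item) in old_lookup:
                    to_update.append(item)
                    del old_lookup[key(item)]   # whatever remains was removed
                else:
                    to_add.append(item)
            to_remove = list(old_lookup.values())
            return to_update, to_add, to_remove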

    Read the article

  • t-sql most efficient row to column? for xml path, pivot

    - by ajberry
        create table _orders
        (
            OrderId int identity(1,1) primary key nonclustered
            ,CustomerId int
        )

        create table _details
        (
            DetailId int identity(1,1) primary key nonclustered
            ,OrderId int
            ,ProductId int
        )

        insert into _orders (CustomerId)
        select 1 union
        select 2 union
        select 3

        insert into _details (OrderId, ProductId)
        select 1,100 union
        select 1,158 union
        select 1,234 union
        select 2,125 union
        select 3,105 union
        select 3,101 union
        select 3,212 union
        select 3,250

        --
        select
            orderid
            ,REPLACE((
                SELECT ' ' + CAST(ProductId as varchar)
                FROM _details d
                WHERE d.OrderId = o.OrderId
                ORDER BY d.OrderId, d.DetailId
                FOR XML PATH('')
            ), '&#x20;', '') as Products
        from _orders o

    I am looking for the most performant way to turn rows into columns. I have a requirement to output the contents of the db (not the actual schema above, but the concept is similar) in both fixed width and delimited formats. The above FOR XML PATH query gives me the result I want, but when dealing with anything other than small amounts of data, it can take a while. I've looked at PIVOT, but most of the examples I have found are aggregating information. I just want to combine the child rows and tack them onto the parent. For example, for an order it would need to output:

        OrderId,Product1,Product2,Product3,etc

    Thoughts or suggestions? I am using SQL Server 2k5.

    Read the article

  • How to shrink-to-fit an std::vector in a memory-efficient way?

    - by dehmann
    I would like to 'shrink-to-fit' an std::vector, to reduce its capacity to its exact size, so that additional memory is freed. The standard trick seems to be the one described here:

        template <typename T, class Allocator>
        void shrink_capacity(std::vector<T,Allocator>& v)
        {
            std::vector<T,Allocator>(v.begin(), v.end()).swap(v);
        }

    The whole point of shrink-to-fit is to save memory, but doesn't this method first create a deep copy and then swap the instances? So at some point -- when the copy is constructed -- the memory usage is doubled? If that is the case, is there a more memory-friendly method of shrink-to-fit? (In my case the vector is really big and I cannot afford to have both the original plus a copy of it in memory at any time.)

    Read the article

  • t-sql most efficient row to column? crosstab for xml path, pivot

    - by ajberry
    I am looking for the most performant way to turn rows into columns. I have a requirement to output the contents of the db (not the actual schema below, but the concept is similar) in both fixed width and delimited formats. The below FOR XML PATH query gives me the result I want, but when dealing with anything other than small amounts of data, it can take a while.

        select
            orderid
            ,REPLACE((
                SELECT ' ' + CAST(ProductId as varchar)
                FROM _details d
                WHERE d.OrderId = o.OrderId
                ORDER BY d.OrderId, d.DetailId
                FOR XML PATH('')
            ), '&#x20;', '') as Products
        from _orders o

    I've looked at PIVOT, but most of the examples I have found are aggregating information. I just want to combine the child rows and tack them onto the parent. I should also point out I don't need to deal with the column names either, since the output of the child rows will either be a fixed width string or a delimited string. For example, given the following tables:

        OrderId     CustomerId
        ----------- -----------
        1           1
        2           2
        3           3

        DetailId    OrderId     ProductId
        ----------- ----------- -----------
        1           1           100
        2           1           158
        3           1           234
        4           2           125
        5           3           101
        6           3           105
        7           3           212
        8           3           250

    for an order I need to output:

        orderid     Products
        ----------- -----------------------
        1           100 158 234
        2           125
        3           101 105 212 250

    or

        orderid     Products
        ----------- -----------------------
        1           100|158|234
        2           125
        3           101|105|212|250

    Thoughts or suggestions? I am using SQL Server 2k5.

    Example setup:

        create table _orders
        (
            OrderId int identity(1,1) primary key nonclustered
            ,CustomerId int
        )

        create table _details
        (
            DetailId int identity(1,1) primary key nonclustered
            ,OrderId int
            ,ProductId int
        )

        insert into _orders (CustomerId)
        select 1 union
        select 2 union
        select 3

        insert into _details (OrderId, ProductId)
        select 1,100 union
        select 1,158 union
        select 1,234 union
        select 2,125 union
        select 3,105 union
        select 3,101 union
        select 3,212 union
        select 3,250

    Using FOR XML PATH:

        select
            orderid
            ,REPLACE((
                SELECT ' ' + CAST(ProductId as varchar)
                FROM _details d
                WHERE d.OrderId = o.OrderId
                ORDER BY d.OrderId, d.DetailId
                FOR XML PATH('')
            ), '&#x20;', '') as Products
        from _orders o

    which outputs what I want, but is very slow for large amounts of data. One of the child tables is over 2 million rows, pushing the processing time out to ~4 hours.

        orderid     Products
        ----------- -----------------------
        1           100 158 234
        2           125
        3           101 105 212 250

    Read the article

  • Would watching a file for changes or redundantly querying that file be more efficient?

    - by badpanda
    I am wondering whether watching a file/directory for changes using the FileSystemWatcher class is extremely memory intensive. I am developing a desktop application in C# that will be running behind the scenes continuously on low-performance computers, and I need some way of checking to see if various files have changed. I can think of a few solutions:

    1. Watch the directories using FileSystemWatcher.
    2. Run a timed thread on an interval that goes through and manually checks this.
    3. Check manually every time the actionhandler thread runs (the program will occasionally do something, on an action).

    Any suggestions? Thanks! badPanda
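
    For what the manual-check option looks like, here is a minimal polling sketch. It is written in Python purely for brevity; in C# the equivalent would compare File.GetLastWriteTimeUtc values on a timer, and the callback name here is hypothetical:

        import os
        import time

        def poll_for_changes(paths, interval=5.0):
            """Option 2: compare modification times on a fixed interval.
            Each check is cheap, but the cost is paid even when nothing changes."""
            last_seen = {p: os.path.getmtime(p) for p in paths}
            while True:
                time.sleep(interval)
                for p in paths:
                    mtime = os.path.getmtime(p)
                    if mtime != last_seen[p]:
                        last_seen[p] = mtime
                        handle_change(p)   # hypothetical callback

        def handle_change(path):
            print("changed:", path)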

    Read the article

  • Efficient way to create/unpack large bitfields in C?

    - by drhorrible
    I have one microcontroller sampling from a lot of ADCs and sending the measurements over a radio at a very low bitrate, and bandwidth is becoming an issue. Right now, each ADC only gives us 10 bits of data, and it's being stored in a 16-bit integer. Is there an easy way to pack them in a deterministic way so that the first measurement is at bit 0, the second at bit 10, the third at bit 20, etc.? To make matters worse, the microcontroller is little endian, and I have no control over the endianness of the computer on the other side.
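
    One way to make the layout deterministic regardless of either machine's endianness is to define it purely in terms of byte indices and LSB-first bit offsets, so sample k always starts at bit 10*k of the byte stream. A sketch of that packing in Python (the same shifts translate directly to C on the micro):

        def pack_10bit(samples):
            """Pack 10-bit samples so sample k occupies bits 10*k .. 10*k+9,
            least-significant bit first within each byte."""
            out = bytearray((len(samples) * 10 + 7) // 8)
            for k, value in enumerate(samples):
                value &= 0x3FF                          # keep only 10 bits
                byte, offset = divmod(k * 10, 8)
                out[byte]     |= (value << offset) & 0xFF
                out[byte + 1] |= (value >> (8 - offset)) & 0xFF
                if offset == 7:                         # sample spills into a third byte
                    out[byte + 2] |= (value >> (16 - offset)) & 0xFF
            return bytes(out)

        def unpack_10bit(data, count):
            samples = []
            for k in range(count):
                byte, offset = divmod(k * 10, 8)
                chunk = data[byte] | (data[byte + 1] << 8)
                if byte + 2 < len(data):
                    chunk |= data[byte + 2] << 16
                samples.append((chunk >> offset) & 0x3FF)
            return samples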

    Read the article

  • SSIS - Parallel Execution of Tasks - How efficient is it?

    - by Randy Minder
    I am building an SSIS package that will contain dozens of Sequence tasks. Each Sequence task will contain three tasks: one to truncate a destination table and remove its indexes, another to import data from a source table, and a third to add back the indexes to the destination table. My question is this: I currently have nine of these Sequence tasks built, and none are dependent on any of the others. When I execute the package, SSIS seems to do a pretty good job of determining which tasks in which Sequence to execute, which, by the way, appears to be quite random. As I continue adding more Sequences, should I attempt to be smarter about how SSIS should execute these Sequences, or is SSIS smart enough to do it itself? Thanks.

    Read the article

  • What Amazon S3 .NET Library is most useful and efficient?

    - by Geo
    There are two main open source .NET Amazon S3 libraries:

    1. Three Sharp
    2. LitS3

    I am currently using LitS3 in our MVC demo project, but there is some criticism about it. Has anyone here used both libraries, so that they can give an objective point of view? Below are some sample calls using LitS3.

    On the demo controller:

        private S3Service s3 = new S3Service()
        {
            AccessKeyID = "Thekey",
            SecretAccessKey = "testing"
        };

        public ActionResult Index()
        {
            ViewData["Message"] = "Welcome to ASP.NET MVC!";
            return View("Index", s3.GetAllBuckets());
        }

    On the demo view:

        <% foreach (var item in Model) { %>
            <p>
                <%= Html.Encode(item.Name) %>
            </p>
        <% } %>

    EDIT 1: Since I have to keep moving and there is no clear indication of which library is more effective and kept more up to date, I have implemented a repository pattern with an interface that will allow me to change libraries if I need to in the future. Below is a section of the S3Repository that I have created, which will let me change libraries in case I need to:

        using LitS3;

        namespace S3Helper.Models
        {
            public class S3Repository : IS3Repository
            {
                private S3Service _repository;

                #region IS3Repository Members

                public IQueryable<Bucket> FindAllBuckets()
                {
                    return _repository.GetAllBuckets().AsQueryable();
                }

                public IQueryable<ListEntry> FindAllObjects(string BucketName)
                {
                    return _repository.ListAllObjects(BucketName).AsQueryable();
                }

                #endregion

    If you have any information about this question please let me know in a comment, and I will get back and edit the question.

    EDIT 2: Since this question is not getting attention, I integrated both libraries in my web app to see the differences in design. I know this is probably a waste of time, but I really want a good long-run solution. Below you will see two samples of the same action with the two libraries; maybe this will motivate some of you to let me know your thoughts.

    With the Three Sharp library:

        public IQueryable<T> FindAllBuckets<T>()
        {
            List<string> list = new List<string>();

            using (BucketListRequest request = new BucketListRequest(null))
            using (BucketListResponse response = service.BucketList(request))
            {
                XmlDocument bucketXml = response.StreamResponseToXmlDocument();
                XmlNodeList buckets = bucketXml.SelectNodes("//*[local-name()='Name']");
                foreach (XmlNode bucket in buckets)
                {
                    list.Add(bucket.InnerXml);
                }
            }

            return list.Cast<T>().AsQueryable();
        }

    With the LitS3 library:

        public IQueryable<T> FindAllBuckets<T>()
        {
            return _repository.GetAllBuckets()
                .Cast<T>()
                .AsQueryable();
        }

    Read the article

  • Should I write more SQL to be more efficient, or less SQL to be less buggy?

    - by RenderIn
    I've been writing a lot of one-off SQL queries to return exactly what a certain page needs and no more. I could reuse existing queries and issue a number of SQL requests linear in the number of records on the page. As an example, I have a query to return People and a query to return Job Details for a person. To return a list of people with their job details, I could query once for people and then once for each person to retrieve their job details. I've found that in most cases that solution returns things in a reasonable amount of time, but I don't know how well it will scale in my environment. Instead I've been writing queries to join people + job details, or people + salary history, etc. I'm looking at my models and I see how I could shave off maybe 30% of my code if I were to reuse existing queries. This is a big temptation. Is it a bad thing to go for reuse over efficiency in general, or does it all come down to the specific situation? Should I first do it the easy way and then optimize later, or is it best to get the code knocked out while everything is fresh in my mind? Thoughts, experiences?
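
    To make the trade-off concrete, here is a small sketch of the two access patterns being weighed. The schema and column names are invented purely for illustration, and sqlite3 is used only to keep the example self-contained:

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
            CREATE TABLE people (id INTEGER PRIMARY KEY, name TEXT);
            CREATE TABLE job_details (person_id INTEGER, title TEXT);
            INSERT INTO people VALUES (1, 'Ann'), (2, 'Bob');
            INSERT INTO job_details VALUES (1, 'DBA'), (1, 'Developer'), (2, 'Analyst');
        """)

        # Reusing the existing small queries: 1 + N round trips (one per person)
        people = db.execute("SELECT id, name FROM people").fetchall()
        for pid, name in people:
            jobs = db.execute(
                "SELECT title FROM job_details WHERE person_id = ?", (pid,)
            ).fetchall()

        # A one-off query written for this page: a single round trip
        rows = db.execute("""
            SELECT p.id, p.name, j.title
            FROM people p LEFT JOIN job_details j ON j.person_id = p.id
        """).fetchall()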

    Read the article

  • What is the most efficient/elegant way to parse a flat table into a tree?

    - by Tomalak
    Assume you have a flat table that stores an ordered tree hierarchy:

        Id   Name          ParentId   Order
        1    'Node 1'      0          10
        2    'Node 1.1'    1          10
        3    'Node 2'      0          20
        4    'Node 1.1.1'  2          10
        5    'Node 2.1'    3          10
        6    'Node 1.2'    1          20

    What minimalistic approach would you use to output that to HTML (or text, for that matter) as a correctly ordered, correctly indented tree? Assume further you only have basic data structures (arrays and hashmaps), no fancy objects with parent/children references, no ORM, no framework, just your two hands. The table is represented as a result set, which can be accessed randomly. Pseudo code or plain English is okay; this is purely a conceptual question.

    Bonus question: Is there a fundamentally better way to store a tree structure like this in an RDBMS?

    EDITS AND ADDITIONS

    To answer one commenter's (Mark Bessey's) question:

    - A root node is not necessary, because it is never going to be displayed anyway. ParentId = 0 is the convention to express "these are top level". The Order column defines how nodes with the same parent are going to be sorted.
    - The "result set" I spoke of can be pictured as an array of hashmaps (to stay in that terminology). In my example it was meant to already be there. Some answers go the extra mile and construct it first, but that's okay.
    - The tree can be arbitrarily deep. Each node can have N children. I did not exactly have a "millions of entries" tree in mind, though.
    - Don't mistake my choice of node naming ('Node 1.1.1') for something to rely on. The nodes could equally well be called 'Frank' or 'Bob'; no naming structure is implied, this was merely to make it readable.

    I have posted my own solution so you guys can pull it to pieces.
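
    For comparison, one minimalistic pass that stays within "arrays and hashmaps" is to group the rows by ParentId once and then walk the groups depth-first. A sketch in Python (pseudo code was explicitly allowed above):

        from collections import defaultdict

        def print_tree(rows):
            """rows: the flat table as a list of hashmaps with Id, Name, ParentId, Order."""
            children = defaultdict(list)
            for row in rows:
                children[row["ParentId"]].append(row)
            for kids in children.values():
                kids.sort(key=lambda r: r["Order"])

            def walk(parent_id, depth):
                for row in children[parent_id]:
                    print("    " * depth + row["Name"])
                    walk(row["Id"], depth + 1)

            walk(0, 0)   # ParentId = 0 marks the top level, per the question

        # print_tree([{"Id": 1, "Name": "Node 1", "ParentId": 0, "Order": 10}, ...])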

    Read the article

  • What are the most efficient MySQL column types for this data?

    - by AlabamaKush
    I have several tables with some pretty standard data in each. Can somebody help me optimize them by telling me the best column types for this data? Beside each one is what I have currently:

    - Number (max length 7)   --> MEDIUMINT(8) Unsigned
    - Text (max length 30)    --> VARCHAR(30)
    - Text (max length 200)   --> VARCHAR(200)
    - Number (max length 4)   --> SMALLINT(5) Unsigned
    - Number (either 0 or 1)  --> TINYINT(1) Unsigned
    - Text (max length 500)   --> TEXT

    Any suggestions? I'm just guessing with this, so I know some of them are wrong...

    Read the article

  • Most efficient algorithm for mesh-level, optimal occlusion culling?

    - by Fredriku73
    I am new to culling. At first glance, it seems that most occlusion culling algorithms are object-level, not examining single meshes, which would be practical for game rendering. What I am looking for is an algorithm that culls all meshes within a single object that are occluded for a given viewpoint, with high accuracy. It needs to be at least O(n log n); a naive mesh-by-mesh comparison (O(n^2)) is too slow. I notice that the Blender GUI identifies the occluded meshes for you in real time, even if you work with large objects of 10,000+ meshes. What algorithm is used there, pray tell?

    Read the article

  • What is the most efficient method to find the triangle that contains a given point?

    - by Christo
    Given the triangle with vertices (a, b, c):

                c
               / \
              /   \
             /     \
            a - - - b

    which is then subdivided into four triangles by halving each of the edges:

                  c
                 / \
                /   \
              ca - - bc
               / \   / \
              /   \ /   \
             a - - ab - - b

    This results in four triangles: (a, ab, ca), (b, bc, ab), (c, ca, bc), (ab, bc, ca). Now given a point p, how do I determine in which triangle p lies, given that p is within the outer triangle (a, b, c)?

    Currently I intend to use ab as the origin. Check whether p is to the left or right of the line "ca - ab" using the perp of "ca - ab", checking the sign of the dot product of the perp vector and the vector "p - ab" against that of "ab - a". If it is the same, or the dot product is zero, then it must be in (a, ab, ca)... Continue with this procedure with the other outer triangles (b, bc, ab) and (c, ca, bc). In the end, if it didn't match these, it must be contained within the inner triangle (ab, bc, ca).

    Is there a better way to do it?
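
    One way to avoid chaining several half-plane tests is to use barycentric coordinates: the corner sub-triangle at a is exactly the set of points whose weight for a is at least 1/2 (and likewise for b and c); otherwise p is in the centre triangle. A sketch in Python (2D points as (x, y) tuples; the function names are mine):

        def barycentric(p, a, b, c):
            """Weights (u, v, w) of p with respect to a, b, c; they sum to 1."""
            def cross(o, s, t):
                return (s[0] - o[0]) * (t[1] - o[1]) - (s[1] - o[1]) * (t[0] - o[0])
            area = cross(a, b, c)
            u = cross(p, b, c) / area   # weight of a
            v = cross(p, c, a) / area   # weight of b
            w = cross(p, a, b) / area   # weight of c
            return u, v, w

        def which_subtriangle(p, a, b, c):
            u, v, w = barycentric(p, a, b, c)
            if u >= 0.5: return "(a, ab, ca)"    # corner triangle at a
            if v >= 0.5: return "(b, bc, ab)"    # corner triangle at b
            if w >= 0.5: return "(c, ca, bc)"    # corner triangle at c
            return "(ab, bc, ca)"                # centre triangle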

    Read the article

  • How do I create efficient instance variable mutators in Matlab?

    - by Trent B
    Previously, I implemented mutators as follows; however, it ran spectacularly slowly on a recursive OO algorithm I'm working on, and I suspected it may have been because I was duplicating objects on every function call... is this correct?

        %% Example Only
        function obj2 = tripleAllPoints(obj1)
            obj1.pts = obj1.pts * 3;
            obj2 = obj1;
        end

    I then tried implementing mutators without using the output object... however, it appears that in MATLAB I can't do this - the changes won't "stick" because of a scope issue?

        %% Example Only
        function tripleAllPoints(obj1)
            obj1.pts = obj1.pts * 3;
        end

    For application purposes, an extremely simplified version of my code (which uses OO and recursion) is below.

        classdef myslice
            properties
                pts    % array of pts
                nROW   % number of rows
                nDIM   % number of dimensions
                subs   % sub-slices
            end % end properties

            methods
                function calcSubs(obj)
                    obj.subs = cell(1,obj.nROW);
                    for i=1:obj.nROW
                        obj.subs{i} = myslice;
                        obj.subs{i}.pts = obj.pts(1:i,2:end);
                    end
                end

                function vol = calcVol(obj)
                    if obj.nROW == 1
                        obj.volume = prod(obj.pts);
                    else
                        obj.volume = 0;
                        calcSubs(obj);
                        for i=1:obj.nROW
                            obj.volume = obj.volume + calcVol(obj.subs{i});
                        end
                    end
                end
            end % end methods
        end % end classdef

    Read the article

  • Memory read/write access efficiency

    - by wolfPack88
    I've heard conflicting information from different sources, and I'm not really sure which one to believe. As such, I'll post what I understand and ask for corrections. Let's say I want to use a 2D matrix. There are three ways that I can do this (at least that I know of).

    1:

        int i;
        char **matrix;
        matrix = malloc(50 * sizeof(char *));
        for(i = 0; i < 50; i++)
            matrix[i] = malloc(50);

    2:

        int i;
        int rowSize = 50;
        int pointerSize = 50 * sizeof(char *);
        int dataSize = 50 * 50;
        char **matrix;
        matrix = malloc(dataSize + pointerSize);
        char *pData = (char *)matrix + pointerSize - rowSize;
        for(i = 0; i < 50; i++)
        {
            pData += rowSize;
            matrix[i] = pData;
        }

    3:

        // instead of accessing matrix[i][j] here, we would access matrix[i * 50 + j]
        char *matrix = malloc(50 * 50);

    In terms of memory usage, my understanding is that 3 is the most efficient, 2 is next, and 1 is least efficient, for the reasons below:

    3: There is only one pointer and one allocation, and therefore, minimal overhead.
    2: Once again, there is only one allocation, but there are now 51 pointers. This means there is 50 * sizeof(char *) more overhead.
    1: There are 51 allocations and 51 pointers, causing the most overhead of all options.

    In terms of performance, once again my understanding is that 3 is the most efficient, 2 is next, and 1 is least efficient. Reasons being:

    3: Only one memory access is needed. We will have to do a multiplication and an addition as opposed to two additions (as in the case of a pointer to a pointer), but memory access is slow enough that this doesn't matter.
    2: We need two memory accesses; once to get a char *, and then to the appropriate char. Only two additions are performed here (once to get to the correct char * pointer from the original memory location, and once to get to the correct char variable from wherever the char * points to), so multiplication (which is slower than addition) is not required. However, on modern CPUs, multiplication is faster than memory access, so this point is moot.
    1: Same issues as 2, but now the memory isn't contiguous. This causes cache misses and extra page table lookups, making it the least efficient of the lot.

    First and foremost: Is this correct? Second: Is there an option 4 that I am missing that would be even more efficient?

    Read the article

  • PHP: What is an efficient way to parse a text file containing very long lines?

    - by Shaun
    I'm working on a parser in PHP which is designed to extract MySQL records out of a text file. A particular line might begin with a string corresponding to which table the records (rows) need to be inserted into, followed by the records themselves. The records are delimited by a backslash and the fields (columns) are separated by commas. For the sake of simplicity, let's assume that we have a table representing people in our database, with the fields being First Name, Last Name, and Occupation. Thus, one line of the file might be as follows:

        [People] = "\Han,Solo,Smuggler\Luke,Skywalker,Jedi..."

    where the ellipses (...) could be additional people. One straightforward approach might be to use fgets() to extract a line from the file, and use preg_match() to extract the table name, records, and fields from that line. However, let's suppose that we have an awful lot of Star Wars characters to track. So many, in fact, that this line ends up being 200,000+ characters/bytes long. In such a case, taking the above approach to extract the database information seems a bit inefficient. You have to first read hundreds of thousands of characters into memory, then read back over those same characters to find regex matches.

    Is there a way, similar to the Java String next(String pattern) method of the Scanner class constructed using a file, that allows you to match patterns in-line while scanning through the file? The idea is that you don't have to scan through the same text twice (to read it from the file into a string, and then to match patterns) or store the text redundantly in memory (in both the file line string and the matched patterns). Would this even yield a significant increase in performance? It's hard to tell exactly what PHP or Java are doing behind the scenes.
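
    To illustrate the "scan once" idea, here is a sketch that reads the file in fixed-size chunks and emits backslash-delimited records as they complete, so no full 200,000-character line ever has to sit in memory. It is written in Python for brevity (the table prefix and quoting are ignored); the same loop ports to PHP using fread() on a stream:

        def iter_records(path, delimiter="\\", chunk_size=64 * 1024):
            """Yield delimiter-separated records without reading a whole line at once."""
            buf = ""
            with open(path, "r", encoding="utf-8") as fh:
                while True:
                    chunk = fh.read(chunk_size)
                    if not chunk:
                        break
                    buf += chunk
                    parts = buf.split(delimiter)
                    buf = parts.pop()          # keep the trailing partial record
                    for record in parts:
                        if record:
                            yield record
            if buf:
                yield buf

        # e.g. each record is "First,Last,Occupation":
        # for rec in iter_records("people.txt"):
        #     first, last, occupation = rec.split(",")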

    Read the article

  • What is the most efficient way to find missing semicolons in VS with C++?

    - by Dr. Monkey
    What are the best strategies for finding that missing semicolon that's causing the error? Are there automated tools that might help? I'm currently using Visual Studio 2008, but general strategies for any environment would be interesting and more broadly useful.

    Background: Presently I have a particularly elusive missing semicolon (or brace) in a C++ program that is causing a C2143 error. My header file dependencies are fairly straightforward, but still I can't seem to find the problem. Rather than post my code and play Where's Wally (or Waldo, depending on where you're from), I thought it would be more useful to get some good strategies that can be applied in this and similar situations.

    As a side question: the C2143 error is showing up in the first line of the first method declaration (i.e. the method's return type) in a .cpp file that includes only its associated .h file. Would anything other than semicolons or braces lead to this behaviour?

    Read the article

  • Favouriting things in a database - most efficient method of keeping track?

    - by a2h
    I'm working on a forum-like webapp where I'd like to allow users to favourite an item so that they can keep track of it, and also so that others can see how many times an item's been favourited. The problem is, I'm unsure on the best practices for databases, which includes this situation. I have two ideas in my head on how to do this:

    1. Add an extra column to the user table and store things like so: "|2|5|73|"
    2. Add an extra table with at least two columns, one for referencing an item, the other for referencing a user.

    I feel uncomfortable about going for the second method as it involves an extra table, and potentially more queries would be required. Perhaps these beliefs aren't an issue, as I have little understanding of databases beyond simply working with table layouts and basic queries.

    Read the article

  • How can I make this SQL query more efficient? PHP.

    - by Alan Grant
    Hi all, I have a system whereby a user can view categories that they've subscribed to individually, and also those that are available in the region they belong in by default. So, the tables are as follows:

    - Categories
    - UsersCategories
    - RegionsCategories

    I'm querying the db for all the categories within their region, and also all the individual categories that they've subscribed to. My query is as follows:

        Select * FROM (categories c)
        LEFT JOIN users_categories uc on uc.category_id = c.id
        LEFT JOIN regions_categories rc on rc.category_id = c.id
        WHERE (rc.region_id = ? OR uc.user_id = ?)

    At least I believe that's the query; I'm creating it using Cake's ORM layer, so the exact one is:

        $conditions = array(
            array(
                "OR" => array(
                    'RegionsCategories.region_id' => $region_id,
                    'UsersCategories.user_id' => $user_id
                )
            ));

        $this->find('all', $conditions);

    This turns out to be incredibly slow (sometimes around 20 seconds or so; each table has around 5,000 rows). Is my design at fault here? How can I retrieve both the users' individual categories and those within their region all in one query without it taking ages? Thanks!

    Read the article

  • What is the most efficient way to store a mapping "key -> event stream"?

    - by jkff
    Suppose there are ~10,000's of keys, where each key corresponds to a stream of events. I'd like to support the following operations:

    - push(key, timestamp, event) - pushes event to the event queue for key, marked with the given timestamp. It is guaranteed that event timestamps for a particular key are pushed in sorted or almost sorted order.
    - tail(key, timestamp) - get all events for key since the given timestamp. Usually the timestamp requests for a given key are almost monotonically increasing, almost synchronously with pushes for the same key.

    This stuff has to be persistent (although it is not absolutely necessary to persist pushes immediately and to keep tails with pushes strictly in sync), so I'm going to use some kind of database. What is the optimal kind of database structure for this task? Would it be better to use a relational database, a key-value storage, or something else?
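
    As a reference point for the semantics (not the persistence question itself), here is an in-memory sketch of the push/tail interface in Python. It keeps one sorted timestamp list per key, takes advantage of the almost-sorted arrival order, and assumes tail() means "events at or after the given timestamp":

        import bisect
        from collections import defaultdict

        class EventStreams:
            def __init__(self):
                self._times = defaultdict(list)    # key -> sorted timestamps
                self._events = defaultdict(list)   # key -> events, parallel to _times

            def push(self, key, timestamp, event):
                times, events = self._times[key], self._events[key]
                if not times or timestamp >= times[-1]:
                    times.append(timestamp)        # the common, almost-sorted case
                    events.append(event)
                else:
                    i = bisect.bisect_right(times, timestamp)
                    times.insert(i, timestamp)
                    events.insert(i, event)

            def tail(self, key, timestamp):
                times, events = self._times[key], self._events[key]
                i = bisect.bisect_left(times, timestamp)
                return events[i:]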

    Read the article

  • What's the most efficient way to format the following string?

    - by Ian P
    I have a very simple question, and I shouldn't be hung up on this, but I am. Haha! I have a string that I receive in the following format(s):

        123
        123456-753
        123455-43
        234234-4
        123415

    The desired output, post formatting, is:

        123-455-444
        123-455-55
        123-455-5
        or
        123-455

    The format is ultimately dependent upon the total number of characters in the original string. I have several ideas of how to do this, but I keep thinking there's a better way than string.Replace and concatenation... Thanks for the suggestions. Ian
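
    Reading the samples as "strip the non-digits, then join groups of three digits with dashes" (that grouping rule is my inference from the examples), the whole transformation is a couple of lines. Python is used here just to show the shape of it; the same two steps translate to Regex.Replace plus string.Join over the chunks in C#:

        import re

        def format_code(raw):
            """Strip non-digits, then insert a dash after every third digit."""
            digits = re.sub(r"\D", "", raw)
            return "-".join(digits[i:i + 3] for i in range(0, len(digits), 3))

        # format_code("123456-753") -> "123-456-753"
        # format_code("123455-43")  -> "123-455-43"
        # format_code("123415")     -> "123-415"
        # format_code("123")        -> "123"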

    Read the article

  • What is the most efficient procedure for implementing a sortable ajax list on the backend?

    - by HenryL
    The most common method is to assign a sequential order field for each item in the list and do an update that maintains the sequence with every ajax sort operation. Unfortunately, this requires an update to each item of the list every time someone sorts. This is fine for small lists, but what's the best way to implement sorting for larger lists that are constantly updated? I am looking for something that minimizes DB IO.
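
    For concreteness, this is what the common method described above amounts to on the server side; a tiny sketch in Python (the step of 10 is an arbitrary choice, and every returned row still has to be written back, which is exactly the IO cost in question):

        def reorder(item_ids, step=10):
            """Recompute the sequential order field after a drag-and-drop sort.
            item_ids is the full list of ids in their new display order."""
            return {item_id: (i + 1) * step for i, item_id in enumerate(item_ids)}

        # The client posts the new ordering after a drag:
        # reorder([7, 3, 9]) -> {7: 10, 3: 20, 9: 30}  ->  one UPDATE per row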

    Read the article

  • Most efficient way to download media files from a server from within an iPhone app.

    - by nikki
    Hi, I have an iPhone application which contains a lot of media files; so far we have been able to keep the size of the app to less than 20 MB. As we add new content with new updates, it would be difficult to keep the app size within the 20 MB limit. The right thing to do here would be to fetch the files on demand from a media/web server. If you have any pointers on how to go about doing this, please suggest. What we would need is to fetch the media files (.jpg/.png images) on demand when the user taps on a small thumbnail for the image. Are there existing frameworks/servers that provide such services to iPhone clients? Thanks.

    Read the article
