Search Results

Search found 9017 results on 361 pages for 'efficient storage'.

Page 45/361 | < Previous Page | 41 42 43 44 45 46 47 48 49 50 51 52  | Next Page >

  • What is an efficient method for partitioning and aggregating intervals from timestamped rows in a data frame?

    - by mattrepl
    From a data frame with timestamped rows (strptime results), what is the best method for aggregating statistics for intervals? Intervals could be an hour, a day, etc. I've found the aggregate function, but that doesn't help with assigning each row to an interval. I'm planning on adding a column to the data frame that denotes interval and using that with aggregate, but if there's a better solution it'd be great to hear it. Thanks for any pointers!
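
    The plan described (add an interval column, then aggregate on it) is the usual one; in R, cut() on the timestamp column produces exactly such a column to hand to aggregate(). For illustration, here is the same bucketing idea as a minimal Python sketch (function and field names are hypothetical):

        from collections import defaultdict

        def mean_by_hour(rows):
            # rows: iterable of (timestamp, value) pairs, timestamps as datetime objects
            buckets = defaultdict(list)
            for ts, value in rows:
                # truncating to the hour plays the role of the "interval column"
                buckets[ts.replace(minute=0, second=0, microsecond=0)].append(value)
            # one summary statistic per interval, here the mean
            return {hour: sum(vals) / len(vals) for hour, vals in buckets.items()}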

    Read the article

  • Building an *efficient* if/then interface for non-technical users to build flow-control in PHP

    - by Brendan
    I am currently building an internal tool to be used by our management to control the flow of traffic. I have built an if/then interface allowing the user to set conditions for certain outcomes, however it is inefficient to use the switch statement to control the flow. How can I improve the efficiency of my code? Example of code:

        if ($previous['route_id'] == $condition['route_id'] && $failed == 0) {
            // if we have not moved on to a new set of rules and we haven't failed yet
            switch ($condition['type']) {
                case 0: $type = $user['hour']; break;
                case 1: $type = $user['location']['region_abv']; break;
                case 2: $type = $user['referrer_domain']; break;
                case 3: $type = $user['affiliate']; break;
                case 4: $type = $user['location']['country_code']; break;
                case 5: $type = $user['location']['city']; break;
            }
            $type = strtolower($type);
            $condition['value'] = strtolower($condition['value']);
            switch ($condition['operator']) {
                case 0: if ($type == $condition['value']); else $failed = '1'; break;
                case 1: if ($type != $condition['value']); else $failed = '1'; break;
                case 2: if ($type > $condition['value']); else $failed = '1'; break;
                case 3: if ($type >= $condition['value']); else $failed = '1'; break;
                case 4: if ($type < $condition['value']); else $failed = '1'; break;
                case 5: if ($type <= $condition['value']); else $failed = '1'; break;
            }
        }
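
    One common way to flatten a double switch like this is a pair of lookup tables keyed by type and operator; a sketch of the idea in Python (field names mirror the question, and comparisons here are plain string comparisons):

        import operator

        FIELDS = {
            0: lambda user: user['hour'],
            1: lambda user: user['location']['region_abv'],
            2: lambda user: user['referrer_domain'],
            3: lambda user: user['affiliate'],
            4: lambda user: user['location']['country_code'],
            5: lambda user: user['location']['city'],
        }
        OPERATORS = {0: operator.eq, 1: operator.ne, 2: operator.gt,
                     3: operator.ge, 4: operator.lt, 5: operator.le}

        def passes(condition, user):
            value = str(FIELDS[condition['type']](user)).lower()
            return OPERATORS[condition['operator']](value, str(condition['value']).lower())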

    Read the article

  • Most efficient way to extract bit flags

    - by Hallik
    I have these possible bit flags: 1, 2, 4, 8, 16, 64, 128, 256, 512, 2048, 4096, 16384, 32768, 65536. So each number is like a true/false statement on the server side. So if the first 3 items, and only the first 3 items, are marked "true" on the server side, the web service will return a 7. Or if all 14 items above are true, I would still get a single number back from the web service which is the sum of all those numbers. What is the best way to handle the number I get back to find out which items are marked as "true"?
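
    For reference, decoding such a value is a bitwise AND against each flag; a Python sketch using the question's flag list:

        FLAGS = [1, 2, 4, 8, 16, 64, 128, 256, 512, 2048, 4096, 16384, 32768, 65536]

        def set_flags(value):
            # a flag is "true" exactly when its bit survives the AND
            return [flag for flag in FLAGS if value & flag]

        print(set_flags(7))   # [1, 2, 4] -- the first three items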

    Read the article

  • Efficient splitting of elements in a field

    - by Gary
    I have a field in a text file exported from a database. The field contains addresses, but sometimes they are quite long and the database allows them to contain multiple lines. When exported, the newline character gets replaced with a dollar sign, like this:

        first part of very long address$second part of very long address$third part of very long address

    Not every address has multiple lines and no address contains more than three lines. The length of each line is variable. I'm massaging the data for import into MS Access, which is used for a mail merge. I want to split the field on the $ sign if it's there, but if the field only contains one line, I want to set my two extra output fields to a zero-length string so that I don't wind up with blank lines in the address when it gets printed. I have an awk file that's working correctly on all the other data in the text file, but I need to get this last bit working. I tried the below code. Aside from the fact that I get a syntax error at the else, I'm not sure this is a good way to do what I want. This is being done with gawk on Windows.

        BEGIN { FS = "|" }
        $1 != "HEADER" {
            if ($6 ~ /\$/)
                split($6, arr, "$")
                address = arr[1]
                addresstwo = arr[2]
                addressthree = arr[3]
                addressLength = length(address)
                addressTwoLength = length(addresstwo)
                addressThreeLength = length(addressthree)
            else {
                address = $6
                addressLength = length($6)
                addresstwo = ""
                addressTwoLength = length(addresstwo)
                addressthree = ""
                addressThreeLength = length(addressthree)
            }
            printf("%*s\t%*s\t\%*s\n", addressLength, address, addressTwoLength, addresstwo, addressThreeLength, addressthree)
        }
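
    Incidentally, the dangling else is explained by awk's grammar: an if body with more than one statement needs braces, so as written only the split() call belongs to the if. The transformation itself (split on '$', pad to three fields) is small; for comparison, a Python sketch:

        def split_address(field):
            parts = field.split('$')
            parts += [''] * (3 - len(parts))   # pad missing lines with zero-length strings
            return parts[:3]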

    Read the article

  • Most efficient way to combine two objects in C#

    - by Dested
    I have two objects that can be represented as an int, float, bool, or string. I need to perform an addition on these two objects with the results being the same thing C# would produce as a result. For instance 1+"Foo" would equal the string "1Foo", 2+2.5 would equal the float 5.5, and 3+3 would equal the int 6. Currently I am using the code below, but it seems like incredible overkill. Can anyone simplify it or point me to some way to do this efficiently?

        private object Combine(object o, object o1)
        {
            float left = 0;
            float right = 0;
            bool isInt = false;
            string l = null;
            string r = null;
            if (o is int) { left = (int)o; isInt = true; }
            else if (o is float) { left = (float)o; }
            else if (o is bool) { l = o.ToString(); }
            else { l = (string)o; }
            if (o1 is int) { right = (int)o1; }
            else if (o is float) { right = (float)o1; isInt = false; } // note: "o is float" here is presumably meant to be "o1 is float"
            else if (o1 is bool) { r = o1.ToString(); isInt = false; }
            else { r = (string)o1; isInt = false; }
            object rr;
            if (l == null)
            {
                if (r == null) { rr = left + right; }
                else { rr = left + r; }
            }
            else
            {
                if (r == null) { rr = l + right; }
                else { rr = l + r; }
            }
            if (isInt) { return Convert.ToInt32(rr); }
            return rr;
        }
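
    For contrast, the underlying rule (strings and bools concatenate, everything else adds numerically, and the result stays an int only for int + int) is short when the language resolves the numeric type itself; a Python sketch:

        def combine(a, b):
            # any string operand concatenates; bools are stringified,
            # matching what the question's code does with ToString()
            if isinstance(a, (str, bool)) or isinstance(b, (str, bool)):
                return str(a) + str(b)
            return a + b   # int + int stays int; a float operand yields a float

        print(combine(1, "Foo"), combine(2, 2.5), combine(3, 3))   # 1Foo 4.5 6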

    Read the article

  • Efficient way of calculating average difference of array elements from array average value

    - by Saysmaster
    Is there a way to calculate the average distance of array elements from the array's average value, by only "visiting" each array element once? (I'm searching for an algorithm.) Example:

        Array:            [1, 5, 4, 9, 6]
        Average:          (1 + 5 + 4 + 9 + 6) / 5 = 5
        Distance array:   [|1-5|, |5-5|, |4-5|, |9-5|, |6-5|] = [4, 0, 1, 4, 1]
        Average distance: (4 + 0 + 1 + 4 + 1) / 5 = 2

    The simple algorithm needs two passes:

      1. Read and accumulate values, then divide the result by the array length to calculate the average value of the array's elements.
      2. Read values, accumulate each one's distance from the previously calculated average value, and then divide the result by the array length to find the average distance of the elements from the average value of the array.

    The two passes are identical. It is the classic algorithm for calculating the average of a set of values; the first one takes as input the elements of the array, the second one the distances of each element from the array's average value. Calculating the average can be modified to not accumulate the values, but to calculate the average "on the fly" as we sequentially read the elements from the array. The formula is:

        RA[1] = A[1]
        RA[i] = RA[i-1] - RA[i-1]/i + A[i]/i    (for i > 1)

    where A[x] is the array's element at position x and RA[x] is the average of the array's elements between position 1 and x (the running average). My question is: is there a similar algorithm to calculate "on the fly" (as we read the array's elements) the average distance of the elements from the array's mean value? The problem is that, as we read the array's elements, the final average value of the array is not known; only the running average is known, so calculating differences from the running average will not yield the correct result. I suppose, if such an algorithm exists, it probably should have the "ability" to compensate, in a way, on each new element read, for the error accumulated so far.
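
    As far as I know, an exact single-pass update for the average absolute distance is not possible with constant state, because each new element shifts the mean and re-weights every earlier |x - mean| term. The squared counterpart (the variance) does admit one, via Welford's algorithm; a Python sketch for contrast:

        def mean_and_variance(xs):
            # Welford's one-pass recurrence; m2 accumulates squared deviations
            mean = m2 = 0.0
            n = 0
            for x in xs:
                n += 1
                delta = x - mean
                mean += delta / n
                m2 += delta * (x - mean)
            return mean, m2 / n

        print(mean_and_variance([1, 5, 4, 9, 6]))   # (5.0, 6.8)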

    Read the article

  • Most efficient way to search the last x lines of a file in python

    - by Harley
    I have a file and I don't know how big it's going to be (it could be quite large, but the size will vary greatly). I want to search the last 10 lines or so to see if any of them match a string. I need to do this as quickly and efficiently as possible and was wondering if there's anything better than:

        s = "foo"
        last_bit = fileObj.readlines()[-10:]
        for line in last_bit:
            if line == s:
                print "FOUND"
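
    A common improvement is to seek near the end of the file and read a single block, instead of letting readlines() pull the whole file into memory; a sketch (Python 3; the fixed block size is an assumption, and very long lines would need a retry loop):

        import os

        def last_lines(path, n=10, block=4096):
            with open(path, 'rb') as f:
                f.seek(0, os.SEEK_END)
                size = f.tell()
                f.seek(max(0, size - block))     # read at most one block from the end
                return [l.decode() for l in f.read().splitlines()[-n:]]

        if any(line == "foo" for line in last_lines("somefile.txt")):
            print("FOUND")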

    Read the article

  • Efficient way to update SQL 'relationship' table

    - by AmbroseChapel
    Say I have three properly normalised tables. One of people, one of qualifications and one mapping people to qualifications:

        People:
            id | Name
            ----------
             1 | Alice
             2 | Bob

        Degrees:
            id | Name
            ---------
             1 | PhD
             2 | MA

        People-to-degrees:
            person_id | degree_id
            ---------------------
                    1 | 2          # Alice has an MA
                    2 | 1          # Bob has a PhD

    So then I have to update this mapping via my web interface. (I made a mistake. Bob has a BA, not a PhD, and Alice just got her B Eng.) There are four possible states of these one-to-many relationship mappings:

      1. was true before, should now be false
      2. was false before, should now be true
      3. was true before, should remain true
      4. was false before, should remain false

    What I don't want to do is read the values from four checkboxes, then hit the database four times to say "Did Bob have a BA before? Well he does now.", "Did Bob have a PhD before? Because he doesn't any more", and so on. How do other people address this issue? I'm curious to see if someone else arrives at the same solution I did.
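
    One common pattern is to load the person's current degree ids, diff them against the checkbox state, and issue only the inserts and deletes that actually changed (all inside one transaction); delete-all-then-reinsert is the blunter alternative. A sketch of the diff in Python:

        def plan_degree_update(old_ids, new_ids):
            old, new = set(old_ids), set(new_ids)
            return new - old, old - new           # (ids to INSERT, ids to DELETE)

        to_insert, to_delete = plan_degree_update({1}, {3})   # Bob: PhD (1) -> BA (3)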

    Read the article

  • Can parser combinators be made efficient?

    - by Jon Harrop
    Around 6 years ago, I benchmarked my own parser combinators in OCaml and found that they were ~5× slower than the parser generators on offer at the time. I recently revisited this subject and benchmarked Haskell's Parsec vs a simple hand-rolled precedence climbing parser written in F#, and was surprised to find the F# to be 25× faster than the Haskell. Here's the Haskell code I used to read a large mathematical expression from file, parse and evaluate it:

        import Control.Applicative
        import Text.Parsec hiding ((<|>))

        expr = chainl1 term ((+) <$ char '+' <|> (-) <$ char '-')
        term = chainl1 fact ((*) <$ char '*' <|> div <$ char '/')
        fact = read <$> many1 digit <|> char '(' *> expr <* char ')'

        eval :: String -> Int
        eval = either (error . show) id . parse expr "" . filter (/= ' ')

        main :: IO ()
        main = do
          file <- readFile "expr"
          putStr $ show $ eval file
          putStr "\n"

    and here's my self-contained precedence climbing parser in F#:

        let rec (|Expr|) (P(f, xs)) = Expr(loop (' ', f, xs))
        and shift oop f op (P(g, xs)) =
            let h, xs = loop (op, g, xs)
            loop (oop, f h, xs)
        and loop = function
            | ' ' as oop, f, ('+' | '-' as op)::P(g, xs)
            | (' ' | '+' | '-' as oop), f, ('*' | '/' as op)::P(g, xs)
            | oop, f, ('^' as op)::P(g, xs) ->
                let h, xs = loop (op, g, xs)
                let op =
                    match op with
                    | '+' -> (+)
                    | '-' -> (-)
                    | '*' -> (*)
                    | '/' -> (/)
                    | '^' -> pown
                loop (oop, op f h, xs)
            | _, f, xs -> f, xs
        and (|P|) = function
            | '-'::P(f, xs) ->
                let f, xs = loop ('~', f, xs)
                P(-f, xs)
            | '('::Expr(f, ')'::xs) -> P(f, xs)
            | c::xs when '0' <= c && c <= '9' -> P(int(string c), xs)

    My impression is that even state-of-the-art parser combinators waste a lot of time backtracking. Is that correct? If so, is it possible to write parser combinators that generate state machines to obtain competitive performance, or is it necessary to use code generation?

    Read the article

  • Efficient mass string search problem.

    - by Monomer
    The problem: A large static list of strings is provided, along with a pattern string comprised of data and wildcard elements (* and ?). The idea is to return all the strings that match the pattern - simple enough.

    Current solution: I'm currently using a linear approach of scanning the large list and globbing each entry against the pattern.

    My question: Are there any suitable data structures that I can store the large list in such that the search's complexity is less than O(n)? Perhaps something akin to a suffix trie? I've also considered using bi- and tri-grams in a hashtable, but the logic required to evaluate a match based on merging the lists of words returned with the pattern is a nightmare, and I'm not convinced it's the correct approach.
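
    For reference, the linear baseline described above is a one-liner in Python, whose fnmatch module implements exactly this * and ? globbing; beating its O(n) is what the index structure would have to buy:

        import fnmatch

        def search(words, pattern):
            # O(n) scan: glob every entry against the pattern
            return fnmatch.filter(words, pattern)

        print(search(["cat", "cart", "dog"], "ca*t"))   # ['cat', 'cart']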

    Read the article

  • Efficient banner rotation with PHP

    - by reggie
    I rotate a banner on my site by selecting it randomly from an array of banners. Sample code as demonstration:

        <?php
        $banners = array(
            '<iframe>...</iframe>',
            '<a href="#"><img src="#.jpg" alt="" /></a>',
            // and so on
        );
        echo $banners[rand(0, count($banners) - 1)];
        ?>

    The array of banners has become quite big. I am concerned with the amount of memory that this array adds to the execution of my page. But I can't figure out a better way of showing a random banner without loading all the banners into memory...
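
    One memory-light alternative is to keep the banners one per line in a file and pick a random line in a single streaming pass (reservoir sampling with k = 1); sketched here in Python, and mechanical to port to PHP:

        import random

        def random_banner(path):
            # each of the n lines ends up selected with probability 1/n
            choice = None
            with open(path) as f:
                for i, line in enumerate(f, 1):
                    if random.randrange(i) == 0:
                        choice = line
            return choice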

    Read the article

  • Cocoa Core Data - Efficient Related Entities Counts

    - by Gary
    I am working on my first iPhone application and I've hit a wall. I'm trying to develop a 'statistics' page for a three-entity relationship. My entities are the following:

      - Department: Name, Address, Building, etc.
      - People: Name, Gender (BOOL), Phone, etc.

    If I have fetched a specific department, how do I filter those results and only return people that are Male (Gender == 0)? If I do NSLog(@"%d", [department.people count]); I get the correct number of people in that department, so I know I'm in the neighborhood. I know I could re-fetch and modify the predicate each time, but with 20+ stats in my app that seems inefficient. Thanks for any advice!

    Read the article

  • Efficiently connecting SQL Server and a numerical methods library

    - by darkcminor
    Is there a way to connect a numerical methods library to SQL Server, so that numerical routines called from .NET can work with SQL Server data? What library do you recommend, maybe MATLAB or R? How do you communicate between SQL Server and .NET with such a library or libraries? Do you have an example? What steps must be followed to make the link between the numerical libraries, .NET and SQL Server?

    Read the article

  • efficient list mapping in python

    - by Joey
    Hi everyone, I have the following input:

        input = [('dog', 'dog', 'cat', 'mouse'), ('cat', 'ruby', 'python', 'mouse')]

    and I am trying to produce the following output:

        outputlist = [[0, 0, 1, 2], [1, 3, 4, 2]]
        outputmapping = {0: 'dog', 1: 'cat', 2: 'mouse', 3: 'ruby', 4: 'python'}

    Any tips on how to handle this with scalability in mind (the input can get really large)?
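
    One approach that stays linear in the input size is a dict that hands out the next integer id the first time each token is seen; a sketch:

        def map_tokens(rows):
            ids = {}
            # setdefault assigns len(ids) as the id on first sight of a token
            outputlist = [[ids.setdefault(tok, len(ids)) for tok in row] for row in rows]
            outputmapping = {v: k for k, v in ids.items()}
            return outputlist, outputmapping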

    Read the article

  • Efficient Method for Preventing Hotlinking via .htaccess

    - by Michael Robinson
    I need to confirm something before I go accuse someone of ... well I'd rather not say.

    The problem: We allow users to upload images and embed them within text on our site. In the past we allowed users to hotlink to our images as well, but due to server load we unfortunately had to stop this.

    Current "solution": The method the programmer used to solve our "too many connections" issue was to rename the file that receives and processes image requests (image_request.php) to image_request2.php, and replace the contents of the original with:

        <?php header("HTTP/1.1 500 Internal Server Error"); ?>

    Obviously this has caused all images with their src attribute pointing to the original image_request.php to be broken, and it is also the wrong code to be sending in this case.

    Proposed solution: I feel a more elegant solution would be, in .htaccess:

      - If the request is for image_request.php, check the referrer.
      - If the referrer is not our site, send the appropriate header.
      - If the referrer is our site, proceed to image_request.php and process the image request.

    What I would like to know is: compared to simply returning a 500 for each request to image_request.php, how much more load would be incurred if we were to use my proposed alternative solution outlined above? Is there a better way to do this?

    Our main concern is that the site stays up. I am not willing to agree that breaking all internally linked images is the best / only way to solve this. I refuse to tell our users that because of something WE changed they must now manually change the embed code in all their previously uploaded content.

    Read the article

  • Efficient Clojure workflow?

    - by Alex B
    I am developing a pet project with Clojure, but wonder if I can speed up my workflow a bit. My current workflow (with Compojure) is:

      1. Start Swank with lein swank.
      2. Go to Emacs, connect with M-x slime-connect.
      3. Load all existing source files one by one. This also starts a Jetty server and an application.
      4. Write some code in the REPL.
      5. When satisfied with experiments, write a full version of a construct I had in mind.
      6. Eval it (C-c C-c).
      7. Switch the REPL to the namespace where this construct resides and test it.
      8. Switch to the browser and reload the browser tab with the affected page.
      9. Tweak the code, eval it, check in the browser.
      10. Repeat any of the above.

    There are a number of annoyances with it:

      - I have to switch between Emacs and the browser (or browsers if I am testing things like templating with multiple browsers) all the time. Is there a common idiom to automate this? I used to have a JavaScript bit that reloads the page continuously, but it's of limited utility, obviously, when I have to interact with the page for more than a few seconds.
      - My JVM instance becomes "dirty" when I experiment and write test functions. Basically namespaces become polluted, especially if I'm refactoring and moving functions between namespaces. This can lead to symbol collisions and I need to restart Swank. Can I undef a symbol?
      - I load all source files one by one (C-c C-k) upon restarting Swank. I suspect I'm doing it all wrong.
      - Switching between the REPL and the file editor can be a bit irritating, especially when I have a lot of Emacs tabs open, alongside the browser(s).

    I'm looking for ways to improve the above points and the entire workflow in general, so I'd appreciate it if you'd share yours.

    P.S. I have also used Vimclojure before, so Vimclojure-based workflows are welcome too.

    Read the article

  • Efficient data structure design

    - by Sway
    Hi there, I need to match a series of user inputed words against a large dictionary of words (to ensure the entered value exists). So if the user entered: "orange" it should match an entry "orange' in the dictionary. Now the catch is that the user can also enter a wildcard or series of wildcard characters like say "or__ge" which would also match "orange" The key requirements are: * this should be as fast as possible. * use the smallest amount of memory to achieve it. If the size of the word list was small I could use a string containing all the words and use regular expressions. however given that the word list could contain potentially hundreds of thousands of enteries I'm assuming this wouldn't work. So is some sort of 'tree' be the way to go for this...? Any thoughts or suggestions on this would be totally appreciated! Thanks in advance, Matt

    Read the article

  • Efficient calculation of matrix cumulative standard deviation in r

    - by Abiel
    I recently posted this question on the r-help mailing list but got no answers, so I thought I would post it here as well and see if there were any suggestions.

    I am trying to calculate the cumulative standard deviation of a matrix. I want a function that accepts a matrix and returns a matrix of the same size where output cell (i,j) is set to the standard deviation of input column j between rows 1 and i. NAs should be ignored, unless cell (i,j) of the input matrix itself is NA, in which case cell (i,j) of the output matrix should also be NA.

    I could not find a built-in function, so I implemented the following code. Unfortunately, this uses a loop that ends up being somewhat slow for large matrices. Is there a faster built-in function, or can someone suggest a better approach?

        cumsd <- function(mat) {
            retval <- mat * NA
            for (i in 2:nrow(mat))
                retval[i, ] <- sd(mat[1:i, ], na.rm = T)
            retval[is.na(mat)] <- NA
            retval
        }

    Thanks.
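
    If no built-in turns up, the repeated rescans can be collapsed into a single pass with Welford's online update; a per-column sketch in Python/NumPy rather than R (the cumsd name mirrors the question, and the NA handling follows the spec above):

        import numpy as np

        def cumsd(mat):
            mat = np.asarray(mat, dtype=float)
            n = np.zeros(mat.shape[1])            # observations seen per column
            mean = np.zeros(mat.shape[1])
            m2 = np.zeros(mat.shape[1])           # running sum of squared deviations
            out = np.full(mat.shape, np.nan)
            for i, row in enumerate(mat):
                ok = ~np.isnan(row)               # NAs are skipped, and stay NaN in out
                n[ok] += 1
                delta = row[ok] - mean[ok]
                mean[ok] += delta / n[ok]
                m2[ok] += delta * (row[ok] - mean[ok])
                ready = ok & (n > 1)              # sd needs at least two observations
                out[i, ready] = np.sqrt(m2[ready] / (n[ready] - 1))
            return out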

    Read the article

  • An Efficient data structure for Sorted List

    - by holydiver
    I want to save my objects, sorted according to a key in the attributes of each object. Later on I'll access these objects sequentially from max key to min key, and I'll do some search tasks as well. I'm considering either an AVL tree or a red-black tree. As far as I know they are nearly equivalent in theory (both give O(log n) operations), but in practice, which one might perform better in my situation? And is there a better alternative than those, considering that I'll mostly do inserts and sequential access to the data structure?
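
    For what it's worth, the two trees really are close to interchangeable; with mostly inserts plus in-order scans, an array kept sorted can also compete thanks to cache locality, at the price of O(n) inserts. A sketch of that alternative with Python's stdlib bisect module (class name hypothetical; assumes unique, comparable keys):

        import bisect

        class SortedStore:
            def __init__(self):
                self._keys = []
                self._objs = []
            def insert(self, key, obj):
                i = bisect.bisect_left(self._keys, key)   # O(log n) locate, O(n) shift
                self._keys.insert(i, key)
                self._objs.insert(i, obj)
            def descending(self):
                return reversed(self._objs)               # max key to min key
            def find(self, key):
                i = bisect.bisect_left(self._keys, key)
                if i < len(self._keys) and self._keys[i] == key:
                    return self._objs[i]
                return None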

    Read the article

  • Efficient paging with large tables in sql 2008

    - by Kumar
    For tables with 1,000,000 rows, and possibly many, many more! I haven't done any benchmarking myself, so I wanted to get the experts' opinion. I've looked at some articles on row_number(), but it seems to have performance implications. What are the other choices/alternatives?
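
    Besides row_number(), keyset (seek) paging is a common alternative for deep pages: remember the last key of the previous page and seek past it, so the index is walked directly instead of numbering a million rows first. A sketch in Python with pyodbc (table and column names are made up):

        import pyodbc

        def next_page(conn, last_seen_id, page_size=50):
            # works on SQL Server 2008: TOP plus a seek predicate on the clustered key
            sql = """
                SELECT TOP (?) id, name
                FROM dbo.big_table
                WHERE id > ?
                ORDER BY id
            """
            return conn.cursor().execute(sql, page_size, last_seen_id).fetchall()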

    Read the article

  • Efficient representation of Hierarchies in Hibernate.

    - by Alison G
    I'm having some trouble representing an object hierarchy in Hibernate. I've searched around, and haven't managed to find any examples doing this or similar - you have my apologies if this is a common question.

    I have two types which I'd like to persist using Hibernate: Groups and Items.

      - Groups are identified uniquely by a combination of their name and their parent.
      - The groups are arranged in a number of trees, such that every Group has zero or one parent Group.
      - Each Item can be a member of zero or more Groups.

    Ideally, I'd like a bi-directional relationship allowing me to get:

      - all Groups that an Item is a member of
      - all Items that are a member of a particular Group or its descendants.

    I also need to be able to traverse the Group tree from the top in order to display it on the UI. The basic object structure would ideally look like this:

        class Group {
            ...
            /** @return all items in this group and its descendants */
            Set<Item> getAllItems() { ... }

            /** @return all direct children of this group */
            Set<Group> getChildren() { ... }
            ...
        }

        class Item {
            ...
            /** @return all groups that this Item is a direct member of */
            Set<Group> getGroups() { ... }
            ...
        }

    Originally, I had just made a simple bi-directional many-to-many relationship between Items and Groups, such that fetching all items in a group hierarchy required recursion down the tree, and fetching groups for an Item was a simple getter, i.e.:

        class Group {
            ...
            private Set<Item> items;
            private Set<Group> children;
            ...
            /** @return all items in this group and its descendants */
            Set<Item> getAllItems() {
                Set<Item> allItems = new HashSet<Item>();
                allItems.addAll(this.items);
                for (Group child : this.getChildren()) {
                    allItems.addAll(child.getAllItems());
                }
                return allItems;
            }

            /** @return all direct children of this group */
            Set<Group> getChildren() {
                return this.children;
            }
            ...
        }

        class Item {
            ...
            private Set<Group> groups;

            /** @return all groups that this Item is a direct member of */
            Set<Group> getGroups() {
                return this.groups;
            }
            ...
        }

    However, this resulted in multiple database requests to fetch the Items in a Group with many descendants, or for retrieving the entire Group tree to display in the UI. This seems very inefficient, especially with deeper, larger group trees. Is there a better or standard way of representing this relationship in Hibernate? Am I doing anything obviously wrong or stupid?

    My only other thought so far was this: replace the group's id, parent and name fields with a unique "path" String which specifies the whole ancestry of a group, e.g.:

        /rootGroup
        /rootGroup/aChild
        /rootGroup/aChild/aGrandChild

    The join table between Groups and Items would then contain group_path and item_id. This immediately solves the two issues I was suffering previously:

      1. The entire group hierarchy can be fetched from the database in a single query and reconstructed in-memory.
      2. To retrieve all Items in a group or its descendants, we can select from group_item where group_path = 'N' or group_path like 'N/%'.

    However, this seems to defeat the point of using Hibernate. All thoughts welcome!

    Read the article
