Search Results

Search found 1367 results on 55 pages for 'cyclomatic complexity'.

Page 7/55 | < Previous Page | 3 4 5 6 7 8 9 10 11 12 13 14  | Next Page >

  • What's the best data-structure for storing 2-tuple (a, b) which supports adding, deleting tuples and c

    - by bhups
    Hi. So here is my problem. I want to store 2-tuples (key, val) and perform the following operations:
    - keys are strings and values are integers
    - multiple keys can have the same value
    - adding new tuples
    - updating any key with a new value (any new or updated value is greater than the previous one, like timestamps)
    - fetching all the keys with values less than or greater than a given value
    - deleting tuples
    A hash seems the obvious choice for updating a key's value, but then lookups via values are going to take longer (O(n)). The other option is a balanced binary search tree with key and value switched, so lookups via values become fast (O(log n)) but updating a key takes O(n). Is there any data structure that addresses both issues? Thanks.
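
    For reference, a minimal Python sketch (not from the question) of one common compromise: a dict for O(1) key updates plus a sorted (value, key) list for value-range queries. The class and method names here are hypothetical; the sorted list keeps searches at O(log n) but shifts elements on insert, so a balanced tree (or the sortedcontainers package) would be needed for truly logarithmic updates.

        import bisect

        class TupleStore:
            """Sketch: dict for key -> value, plus a sorted (value, key) list
            so 'all keys with value < x' becomes a binary search."""
            def __init__(self):
                self._by_key = {}      # key -> value
                self._by_value = []    # sorted list of (value, key) pairs

            def put(self, key, value):
                old = self._by_key.get(key)
                if old is not None:
                    i = bisect.bisect_left(self._by_value, (old, key))
                    self._by_value.pop(i)            # drop the stale entry
                self._by_key[key] = value
                bisect.insort(self._by_value, (value, key))

            def delete(self, key):
                value = self._by_key.pop(key)
                i = bisect.bisect_left(self._by_value, (value, key))
                self._by_value.pop(i)

            def keys_with_value_below(self, threshold):
                i = bisect.bisect_left(self._by_value, (threshold, ""))
                return [k for _, k in self._by_value[:i]]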

    Read the article

  • Way to store a large dictionary with low memory footprint + fast lookups (on Android)

    - by BobbyJim
    I'm developing an Android word game app that needs a large (~250,000-word) dictionary available. I need:
    - Reasonably fast lookups, e.g. constant time preferable; maybe 200 lookups a second on occasion to solve a word puzzle, and more often about 20 lookups within 0.2 seconds to check words the user just spelled. (Edit: lookups are typically asking "is this word in the dictionary?". I'd like to support up to two wildcards in the word as well, but this is easy enough by just generating all possible letters the wildcards could have been and checking the generated words, i.e. 26 * 26 lookups for a word with two wildcards.)
    - As it's a mobile app, using as little memory as possible and requiring only a small initial download for the dictionary data is the top priority.
    My first naive attempt used Java's HashMap class, which caused an out-of-memory exception. I've looked into the SQLite databases available on Android, but this seems like overkill. What's a good way to do what I need?
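
    As a point of reference only (not from the question), a sketch of the simplest low-memory option: ship the dictionary as one sorted lowercase word per line and binary-search it. The file name and helper names below are made up; a trie/DAWG-style structure would compress further, but a sorted array is easy to build and keeps lookups at O(log n).

        import bisect

        def load_words(path="dictionary_sorted.txt"):
            # One lowercase word per line, pre-sorted at build time.
            with open(path, encoding="utf-8") as f:
                return [line.strip() for line in f]

        def contains(words, candidate):
            i = bisect.bisect_left(words, candidate.lower())
            return i < len(words) and words[i] == candidate.lower()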

    Read the article

  • What data is actually stored in a B-tree database in CouchDB?

    - by Andrey Vlasovskikh
    I'm wondering what is actually stored in a CouchDB database B-tree. CouchDB: The Definitive Guide says that the database B-tree is used for append-only operations and that a database is stored in a single B-tree (besides the per-view B-trees). So I guess the data items appended to the database file are revisions of documents, not whole documents (inner B-tree nodes omitted below, leaves shown):

        +------+  +------+  +------+       +------+
        | doc1 |  | doc2 |  | doc1 |  ...  | doc1 |
        | rev1 |  | rev1 |  | rev2 |       | rev7 |
        +------+  +------+  +------+       +------+

    Is that true? If it is, how is the current revision of a document determined based on such a B-tree? Doesn't it mean that CouchDB needs a separate "view" index over current document revisions to preserve O(log n) access? And wouldn't building such an index lead to race conditions? (As far as I know, CouchDB uses no write locks.)

    Read the article

  • Why are difference lists more efficient than regular concatenation?

    - by Craig Innes
    I am currently working my way through the Learn You a Haskell book online, and have come to a chapter where the author explains that some list concatenations can be inefficient. For example:

        ((((a ++ b) ++ c) ++ d) ++ e) ++ f

    is supposedly inefficient. The solution the author comes up with is to use 'difference lists', defined as:

        newtype DiffList a = DiffList { getDiffList :: [a] -> [a] }

        instance Monoid (DiffList a) where
            mempty = DiffList (\xs -> [] ++ xs)
            (DiffList f) `mappend` (DiffList g) = DiffList (\xs -> f (g xs))

    I am struggling to understand why DiffList is more computationally efficient than simple concatenation in some cases. Could someone explain in simple terms why the above example is so inefficient, and in what way DiffList solves this problem?
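
    For illustration only, a rough Python analogue of the two shapes (it ignores Haskell's laziness, and the helper names are invented): left-nested concatenation copies the ever-growing accumulated list at every step, roughly O(n^2) element copies in total, while the difference-list idea represents a chunk as a function that writes its elements into the result, so combining builders is O(1) and every element is written exactly once at the end.

        chunks = [[1, 2], [3], [4, 5], [6]]

        # Naive left-nested concatenation: each step copies everything built so
        # far, so building n elements this way costs O(n^2) element copies.
        acc = []
        for c in chunks:
            acc = acc + c

        # Difference-list style (in spirit): a "list" is a function that writes
        # its elements into a shared output list. Combining two is O(1).
        def diff(chunk):
            return lambda out: out.extend(chunk)

        def combine(f, g):
            def h(out):
                f(out)
                g(out)
            return h

        builder = diff([])                  # plays the role of mempty
        for c in chunks:
            builder = combine(builder, diff(c))

        result = []
        builder(result)                     # result == [1, 2, 3, 4, 5, 6], one pass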

    Read the article

  • is it possible to write a program which prints its own source code utilizing a "sequence-generating-

    - by guest
    Is it possible to write a program which prints its own source code utilizing a "sequence-generating function"? What I call a sequence-generating function is simply a function which returns a value out of a specific interval (i.e. the printable ASCII characters, 32-126). The point now is that this generated sequence should be the program's own source code. As you see, implementing a function which returns an arbitrary sequence is really trivial, but since the returned sequence must contain the implementation of the function itself, it is a highly non-trivial task. This is how such a program (and its corresponding output) could look:

        #include <stdio.h>

        int fun(int x)
        {
            ins1;
            ins2;
            ins3;
            .
            .
            .
            return y;
        }

        int main(void)
        {
            int i;
            for (i = 0; i < size of the program; i++) {
                printf("%c", fun(i));
            }
            return 0;
        }

    I personally think it is not possible, but since I don't know very much about the underlying matter, I posted my thoughts here. I'm really looking forward to hearing some opinions!
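
    For what it's worth, self-reproducing programs (quines) are certainly possible; the usual trick is to store the source as data and print it twice, once quoted and once not. Whether that can be packaged as a pure character-index function is the interesting part of the question, but a function like f(i) = source[i] can be defined on top of the same trick. A minimal, standard Python example (not tied to the C sketch above):

        # Classic quine: the program text is kept in a string and substituted
        # into itself; %r reproduces the quotes and the escaped newline.
        s = 's = %r\nprint(s %% s)'
        print(s % s)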

    Read the article

  • Java: Calculate distance between a large number of locations and performance

    - by Ally
    I'm creating an application that will tell a user how far away a large number of points are from their current position. Each point has a longitude and latitude. I've read over this article: http://www.movable-type.co.uk/scripts/latlong.html and seen this post: http://stackoverflow.com/questions/837872/calculate-distance-in-meters-when-you-know-longitude-and-latitude-in-java There are a number of calculations (50-200) that need to be carried out. If speed is more important than the accuracy of these calculations, which one is best?
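
    If speed matters more than accuracy over modest distances, the equirectangular approximation from the movable-type page is the usual cheap choice. A Python sketch (the constant and function names are mine; the same arithmetic ports directly to Java):

        import math

        EARTH_RADIUS_M = 6371000.0  # mean Earth radius in metres

        def fast_distance_m(lat1, lon1, lat2, lon2):
            """Equirectangular approximation: one cos() instead of the
            haversine's several trig calls; fine for ranking nearby points."""
            phi1, phi2 = math.radians(lat1), math.radians(lat2)
            x = math.radians(lon2 - lon1) * math.cos((phi1 + phi2) / 2.0)
            y = phi2 - phi1
            return EARTH_RADIUS_M * math.hypot(x, y)

    If the distances are only used for ordering points, comparing x*x + y*y and skipping the square root is cheaper still.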

    Read the article

  • Nested loop with dependent bounds trip count

    - by aaa
    Hello. Just out of curiosity I tried to do the following, which turned out to be not so obvious to me. Suppose I have nested loops with runtime bounds, for example:

        t = 0  // trip count
        for l in 0:N
            for k in 0:N
                for j in max(l,k):N
                    for i in k:j+1
                        t += 1

    t is the loop trip count. Is there a general algorithm/way (better than O(N^4), obviously) to calculate the loop trip count? I am working on the assumption that the iteration bounds depend only on constants or previous loop variables.
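
    One reduction (a sketch, not claiming it is optimal): the two innermost loops contribute sum over j of (j - k + 1), an arithmetic series with a closed form, so the count can be computed in O(N^2) instead of O(N^4). Hypothetical Python:

        def trip_count(N):
            """Trip count of the nest above: for each (l, k), sum
            (j - k + 1) for j from max(l, k) to N-1 in closed form."""
            total = 0
            for l in range(N):
                for k in range(N):
                    j0 = max(l, k)
                    terms = N - j0            # number of j values
                    first = j0 - k + 1        # i-iterations at j = j0
                    last = N - k              # i-iterations at j = N - 1
                    total += terms * (first + last) // 2
            return total

    The same summation can be collapsed again over k (and then l) to get a pure formula in N, but the O(N^2) version is already a large improvement over simulating the loops.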

    Read the article

  • algorithm to find longest non-overlapping sequences

    - by msalvadores
    I am trying to find the best way to solve the following problem, where by best I mean least complex. As input, take a list of tuples (start, length) such as: [(0,5),(0,1),(1,9),(5,5),(5,7),(10,1)]. Each element represents a sequence by its start and length; for example (5,7) is equivalent to the sequence (5,6,7,8,9,10,11) - a list of 7 elements starting with 5. One can assume that the tuples are sorted by the start element. The output should return a non-overlapping combination of tuples that represents the longest continuous sequence(s). This means that a solution is a subset of ranges with no overlaps and no gaps and is the longest possible - there could be more than one, though. For example, for the given input the solution is [(0,5),(5,7)], equivalent to (0,1,2,3,4,5,6,7,8,9,10,11). Is backtracking the best approach to solve this problem? I'm interested in any different approaches that people could suggest. Also, if anyone knows a formal reference for this problem or another one that is similar, I'd like to get the reference. BTW - this is not homework. Edit: just to avoid some mistakes, this is another example of the expected behaviour. For an input like [(0,1),(1,7),(3,20),(8,5)] the right answer is [(3,20)], equivalent to (3,4,5,...,22) with length 20. Some of the answers received would give [(0,1),(1,7),(8,5)], equivalent to (0,1,2,...,11,12), as the right answer. But this last answer is not correct, because it is shorter than [(3,20)].
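
    For comparison with backtracking, one linear-scan idea (a sketch under the stated assumptions, with invented names): since chains must be gapless, a tuple (start, length) can only extend a chain that ends exactly at start, so a single pass keyed on chain end points finds the best total length in O(n) after sorting.

        def longest_chain_length(tuples):
            """tuples: (start, length) pairs sorted by start.
            Returns the length of the longest gapless, non-overlapping chain."""
            best_end = {}   # chain end coordinate -> best total length ending there
            best = 0
            for start, length in tuples:
                total = length + best_end.get(start, 0)   # extend a chain ending at start, if any
                end = start + length
                if total > best_end.get(end, 0):
                    best_end[end] = total
                best = max(best, total)
            return best

        # longest_chain_length([(0,5),(0,1),(1,9),(5,5),(5,7),(10,1)]) == 12
        # longest_chain_length([(0,1),(1,7),(3,20),(8,5)]) == 20

    Recovering the tuples themselves only needs back-pointers kept alongside the lengths.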

    Read the article

  • Perl, time efficient hash

    - by Mike
    Is it possible to use a Perl hash in a manner that has O(log(n)) lookup and insertion? By default, I assume the lookup is O(n), since it's represented by an unsorted list. I know I could create a data structure to satisfy this (i.e. a tree, etc.); however, it would be nicer if it were built in and could be used as a normal hash (i.e. with %).

    Read the article

  • How to know if your Unit Test is "right-sized"?

    - by leeand00
    One thing that I've always noticed with my unit tests is that they get to be kind of verbose; seeing as they could also be not verbose enough, how do you get a sense of when your unit tests are the right size? I know of a good quote for this and it's: "Perfection is achieved, not when there is nothing left to add, but when there is nothing left to remove." - Antoine de Saint-Exupery.

    Read the article

  • Where should the line between property and method be?

    - by Catskul
    For many situations it is obvious whether something should be a property or a method; however, there are items that might be considered ambiguous.
    Obvious properties: "name", "length"
    Obvious methods: "SendMessage", "Print"
    Ambiguous: "Valid" / "IsValid" / "Validate"; "InBounds" / "IsInBounds" / "CheckBounds"; "AverageChildValue" / "CalcAverageChildValue"; "ColorSaturation" / "SetColorSaturation"
    I suppose I would lean towards methods for the ambiguous cases, but does anyone know of a rule or convention that helps decide this? E.g. should all properties be O(1)? Should a property not be able to change other data (ColorSaturation might change R, G, B values)? Should it not be a property if there is calculation or aggregation? Just from an academic perspective (and not because I think it's a good idea), is there a reason not to go crazy with properties and just make everything that interrogates the class without taking an argument, and everything that changes the class with a single argument and can't fail, a property?

    Read the article

  • How to know if your Unit Test Fixture is “right-sized”?

    - by leeand00
    How do you know if your "Test Fixture" is right-sized? And by "Test Fixture" I mean a class with a bunch of tests in it. One thing that I've always noticed with my test fixtures is that they get to be kind of verbose; seeing as they could also be not verbose enough, how do you get a sense of when your unit tests are the right size? My assumption is that (at least in the context of web development) you should have one test fixture class per page. I know of a good quote for this and it's: "Perfection is achieved, not when there is nothing left to add, but when there is nothing left to remove." - Antoine de Saint-Exupery.

    Read the article

  • What is the n in O(n) when comparing sorting algorithms?

    - by Mumfi
    The question is rather simple, but I just can't find a good enough answer. I've taken a look at the most upvoted question regarding Big-O notation, namely this one: Plain English explanation of Big O. It says there that: "For example, sorting algorithms are typically compared based on comparison operations (comparing two nodes to determine their relative ordering)." Now let's consider the simple bubble sort algorithm:

        for (int i = arr.length - 1; i > 0; i--) {
            for (int j = 0; j < i; j++) {
                if (arr[j] > arr[j+1]) {
                    switchPlaces(...)
                }
            }
        }

    I know that the worst case is O(n^2) and the best case is O(n), but what is n exactly? If we attempt to sort an already sorted array (best case), we would end up doing nothing, so why is it still O(n)? We are still looping through two for-loops, so if anything it should be O(n^2). n can't be the number of comparison operations, because we still compare all the elements, right? This confuses me, and I'd appreciate it if someone could help me.
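
    For context (an assumption about which variant the O(n) figure refers to): n is the number of elements being sorted, and the O(n) best case applies to the common optimized bubble sort that stops after a pass with no swaps, so an already sorted array costs a single pass of n - 1 comparisons. A Python sketch of that variant:

        def bubble_sort(arr):
            n = len(arr)
            for i in range(n - 1, 0, -1):
                swapped = False
                for j in range(i):
                    if arr[j] > arr[j + 1]:
                        arr[j], arr[j + 1] = arr[j + 1], arr[j]
                        swapped = True
                if not swapped:      # a full pass with no swaps: already sorted
                    break
            return arr

    The unoptimized version shown in the question still performs about n^2/2 comparisons even on sorted input.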

    Read the article

  • fastest way to perform string search in general and in python

    - by Rkz
    My task is to search for a string or a pattern in a list of documents that are very short (say 200 characters long). However, there are around 1 million such documents. What is the most efficient way to perform this search? I was thinking of tokenizing each document and putting the words in a hashtable with the word as key and the document number as value, thereby creating a bag of words. Then perform the word search and retrieve the list of documents that contain the word. From what I can see, this operation will take O(n) operations. Is there any other way, maybe without using hash tables? Also, is there a Python library or third-party package that can perform efficient searches?
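
    For illustration, a minimal version of the inverted-index idea described above (plain Python, no third-party packages; the function names are mine). Libraries such as Whoosh, or an SQLite FTS table, cover the same ground with more features.

        from collections import defaultdict

        def build_index(documents):
            """documents: list of short strings. Returns word -> set of document ids."""
            index = defaultdict(set)
            for doc_id, text in enumerate(documents):
                for word in text.lower().split():
                    index[word].add(doc_id)
            return index

        def search(index, word):
            return index.get(word.lower(), set())

    Building the index is O(total words); each word query is then close to O(1) plus the size of its result set. Pattern or substring queries would need something richer, such as a suffix structure over the documents.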

    Read the article

  • Sort vector<int>(n) in O(n) time using O(m) space?

    - by Adam
    I have a vector<unsigned int> vec of size n. Each element in vec is in the range [0, m], there are no duplicates, and I want to sort vec. Is it possible to do better than O(n log n) time if you're allowed to use O(m) space? In the average case m is much larger than n; in the worst case m == n. Ideally I want something O(n). I get the feeling that there's a bucket-sort-ish way to do this:
    1. unsigned int aux[m];
    2. aux[vec[i]] = i;
    3. Somehow extract the permutation and permute vec.
    I'm stuck on how to do 3. In my application m is on the order of 16k. However, this sort is in the inner loops and accounts for a significant portion of my runtime.
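
    One observation (a sketch, not from the question): because the values are distinct and bounded by m, presence flags are enough and no permutation needs to be extracted at all, giving O(n + m) time and O(m) extra space. Shown in Python for brevity; the C++ version is a direct translation using a std::vector<bool> or a bitset.

        def sort_bounded_distinct(vec, m):
            """vec holds distinct integers in [0, m]. Sorts vec in O(n + m)."""
            present = [False] * (m + 1)
            for v in vec:
                present[v] = True
            i = 0
            for value in range(m + 1):
                if present[value]:
                    vec[i] = value
                    i += 1
            return vec

    With m around 16k the flag array fits comfortably in cache, especially as a bitset.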

    Read the article

  • Can you change the type of Active Directory Password Complexity to be different than MS version?

    - by littlegeek
    Here it states that the policy "Passwords must meet complexity requirements" determines whether password complexity is enforced. If this setting is enabled, user passwords must meet the following requirements:
    - The password is at least six characters long.
    - The password contains characters from at least three of the following five categories: English uppercase characters (A - Z), English lowercase characters (a - z), base-10 digits (0 - 9), non-alphanumeric characters (for example: !, $, #, or %), and Unicode characters.
    - The password does not contain three or more characters from the user's account name.
    The only setting is to ENABLE or DISABLE this feature. I was wondering if there is a way to change this policy? If so, where?

    Read the article

  • Disable Password Complexity/Expiration etc. Policy on Windows Server 2008

    - by Sahil Malik
    One of the things I like to do, for development environments only, is to get rid of those excessively bothersome password policies. I like to have my password be something like p@ssword1, so it is easy to remember, etc. Obviously never do this in production. However, Windows Server 2008 comes with a password policy that expires my passwords every 90 days, requires me to pick complex passwords, won't let me reuse passwords, and so on. Well, here is how you disable the password policy on a Windows Server 2008 machine:
    - Run Group Policy Management (gpmc.msc).
    - Expand to your domain and look for Forest\Domains\yourdomain\Default Domain Policy.
    - Go to the Settings tab, right click on the tab, and choose "Edit". This opens the Group Policy Management Editor.
    - In the editor, go to Computer Configuration\Policies\Windows Settings\Security Settings\Account Policies\Password Policy, and change the policy to whatever suits you.
    - Close everything, run a command prompt as administrator, and issue a "gpupdate /force" command to force the group policy update on the machine.
    - Restart, and you're done! :)

    Read the article

  • Why should appending to a list in Scala have O(n) time complexity?

    - by Jubbat
    I am learning Scala at the moment and I just read that the execution time of the append operation for a list (:+) grows linearly with the size of the list. Appending to a list seems like a pretty common operation, so why should the idiomatic way to do this be prepending the components and then reversing the list? It can't be a design failure either, as the implementation could be changed at any point. From my point of view, both prepending and appending should be O(1). Is there any legitimate reason for this?
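
    The usual explanation is structural, and a rough Python analogue of an immutable cons list (names invented, recursion kept for clarity) shows it: prepending allocates a single cell and shares the old list as its tail, while appending has to rebuild every cell on the spine so that the last one can point at the new element.

        class Cons:
            __slots__ = ("head", "tail")
            def __init__(self, head, tail):
                self.head, self.tail = head, tail

        NIL = None  # the empty list

        def prepend(x, xs):
            return Cons(x, xs)                    # O(1): xs is reused as the tail

        def append(xs, x):
            if xs is NIL:
                return Cons(x, NIL)
            # O(n): each cell is copied because its tail must change
            return Cons(xs.head, append(xs.tail, x))

    Making append cheap requires either mutation or a different structure (e.g. Scala's Vector or a ListBuffer), which is why the prepend-then-reverse idiom exists for immutable List.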

    Read the article

  • How to deal with elimination of duplicate logic vs. cost of complexity increase?

    - by Gabriel
    I just wrote some code that is very representative of a recurring theme in my coding world lately:
    - repeated logic leads to an instinct to eliminate duplication,
    - which results in something that is more complex,
    - and that tradeoff seems wrong to me (the examples of the negative side aren't worth posting, but this is probably the 20th console utility I've written in the past 12 months).
    I'm curious whether I'm missing some techniques or whether this is really just one of those "experience tells you when to do what" types of issue. Here's the code; I'm tempted to leave it as is, even though there will be about 20 of those if-blocks when I'm done.

        static void Main(string[] sargs)
        {
            try
            {
                var urls = new DirectTrackRestUrls();
                var restCall = new DirectTrackRestCall();
                var logger = new ConsoleLogger();
                Args args = (Args)Enum.Parse(typeof(Args), string.Join(",", sargs));

                if (args.HasFlag(Args.Campaigns))
                {
                    var getter = new ResourceGetter(logger, urls.ListAdvertisers, restCall);
                    restCall.UriVariables.Add("access_id", 1);
                    getter.GotResource += new ResourceGetter.GotResourceEventHandler(getter_GotResource);
                    getter.GetResources();
                    SaveResources();
                }
                if (args.HasFlag(Args.Advertisers))
                {
                    var getter = new ResourceGetter(logger, urls.ListAdvertisers, restCall);
                    restCall.UriVariables.Add("access_id", 1);
                    getter.GotResource += new ResourceGetter.GotResourceEventHandler(getter_GotResource);
                    getter.GetResources();
                    SaveResources();
                }
                if (args.HasFlag(Args.CampaignGroups))
                {
                    var getter = new ResourceGetter(logger, urls.ListCampaignGroups, restCall);
                    getter.GotResource += new ResourceGetter.GotResourceEventHandler(getter_GotResource);
                    getter.GetResources();
                    SaveResources();
                }
            }
            catch (Exception e)
            {
                Console.WriteLine(e.InnerException);
                Console.WriteLine(e.StackTrace);
            }
        }

    Read the article

< Previous Page | 3 4 5 6 7 8 9 10 11 12 13 14  | Next Page >