Search Results

Search found 1375 results on 55 pages for 'asymptotic complexity'.

Page 6/55 | < Previous Page | 2 3 4 5 6 7 8 9 10 11 12 13  | Next Page >

  • What is the minimal licensable source code?

    - by Hernán Eche
    Let's suppose I want to "protect" this code against being used without attribution, against being patented, or through any open source licence... #include <stdio.h> int main(void) { int version = 2; printf("\r\n.Hello world, ver:(%d).", version); return 0; } It's a little obvious, or just a language-definition example. When does a piece of source code stop being "trivial, banal, commonplace, obvious" and start being something over which you may claim "rights"? Perhaps it depends on who reads it: something that could be sheer genius to someone who has never programmed could be just obvious to an expert. It's easy when two sources share 10,000 identical lines of code; that's theft. But it's not always so obvious. How do you measure the amount of "ownness"? Is it about creativity? Line count? Complexity? I can't imagine objective answers for that, only some patches. For example, perhaps complexity: it's not fair to replace "years of engineering" with "copy and paste". But is there any objective index for an objective determination of this subject? (In a funny way I imagine this criterion: if the licence is longer than the code, then there is no owner, just to punish not caring about storage space and world resources =P)

    Read the article

  • Efficient Multiplication of Varying-Length #s [Conceptual]

    - by Milan Patel
    Write the pseudocode of an algorithm that takes in two arbitrary-length numbers (provided as strings) and computes the product of these numbers. Use an efficient procedure for multiplication of large numbers of arbitrary length. Analyze the efficiency of your algorithm. I decided to take the (semi) easy way out and use the Russian Peasant Algorithm. It works like this: a * b = (a/2) * 2b if a is even, and a * b = ((a-1)/2) * 2b + b if a is odd. My pseudocode is:

        rpa(x, y){
            if x is 1 return y
            if x is even return rpa(x/2, 2y)
            if x is odd return rpa((x-1)/2, 2y) + y
        }

    I have three questions: 1. Is this efficient for arbitrary-length numbers? I implemented it in C and tried numbers of varying length. The run-time was near-instant in all cases, so it's hard to tell empirically... 2. Can I apply the Master Theorem to understand the complexity? My attempt: a = number of subproblems in the recursion = 1 (at most one recursive call in any case); n/b = size of each subproblem = n/1, so b = 1 (the problem doesn't change size...?); f(n^d) = work done outside the recursive calls = 1, so d = 0 (the addition when x is odd). With a = 1 and b^d = 1 we have a = b^d, so the complexity is O(n^d * log n) = O(log n). This makes sense logically since we are halving the problem at each step, right? 3. What might my professor mean by providing the arbitrary-length numbers "as strings"? Why do that? Many thanks in advance.
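
    For illustration, a minimal Python sketch of the Russian peasant recurrence described above, assuming non-negative integers (the function name is just a placeholder):

        def rpa(x, y):
            # Russian peasant multiplication: computes x * y by halving x and doubling y.
            if x == 0:
                return 0
            if x == 1:
                return y
            if x % 2 == 0:
                return rpa(x // 2, 2 * y)
            return rpa((x - 1) // 2, 2 * y) + y

        print(rpa(238, 13))  # 3094

    Each step halves x, so there are O(log x) recursive calls; for truly arbitrary-length numbers the cost of the big-number additions themselves also has to be counted, which is presumably why the inputs are given as strings.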

    Read the article

  • Big O, how do you calculate/approximate it?

    - by Sven
    Most people with a degree in CS will certainly know what Big O stands for. It helps us measure how (in)efficient an algorithm really is, and if you know what category the problem you are trying to solve lies in, you can figure out whether it is still possible to squeeze out that little extra performance.* But I'm curious: how do you calculate or approximate the complexity of your algorithms? *: but as they say, don't overdo it; premature optimization is the root of all evil, and optimization without a justified cause deserves that name as well.
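
    By way of a made-up illustration, one common approximation is simply counting how often the innermost statement runs as a function of the input size:

        def count_single(n):
            ops = 0
            for i in range(n):          # the loop body runs n times -> O(n)
                ops += 1
            return ops

        def count_nested(n):
            ops = 0
            for i in range(n):          # outer loop: n iterations
                for j in range(n):      # inner loop: n iterations for each outer one
                    ops += 1            # executed n * n times -> O(n^2)
            return ops

        print(count_single(1000), count_nested(1000))  # 1000 1000000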

    Read the article

  • Code Analysis In Python

    - by Jerub
    What tools are good to use for code analysis in Python? I have a large source repository split across multiple projects, and I would like to be able to run tools across the directories to see details like cyclomatic complexity, and perhaps be able to spot errors using static analysis. Ideally, I would like to be able to produce a report about the health of the source code, so we can spot problem areas that need to be addressed.

    Read the article

  • Longest Common Subsequence

    - by tsudot
    Consider two sequences X[1..m] and Y[1..n]. The memoization algorithm computes the LCS in O(m*n) time. Is there any better algorithm with respect to time? I guess memoization done diagonally can give us O(min(m,n)) time complexity.
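
    For reference, a minimal Python sketch of the standard O(m*n) dynamic-programming formulation mentioned above (my own illustration, not from the question):

        def lcs_length(x, y):
            m, n = len(x), len(y)
            # dp[i][j] = length of the LCS of x[:i] and y[:j]
            dp = [[0] * (n + 1) for _ in range(m + 1)]
            for i in range(1, m + 1):
                for j in range(1, n + 1):
                    if x[i - 1] == y[j - 1]:
                        dp[i][j] = dp[i - 1][j - 1] + 1
                    else:
                        dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
            return dp[m][n]

        print(lcs_length("ABCBDAB", "BDCABA"))  # 4

    Note that the full table also costs O(m*n) space; keeping only two rows brings the space down to O(min(m,n)), but the time stays O(m*n).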

    Read the article

  • The limits of parallelism

    - by psihodelia
    Is it possible to solve a problem of O(n!) complexity within a reasonable time, given an infinite number of processing units and infinite space? The typical example of an O(n!) problem is brute-force search: trying all permutations (ordered combinations).
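
    To make the n! growth concrete, a small illustrative Python sketch of the brute-force pattern (the is_solution predicate is a stand-in); each of the n! candidates is an independent check, which is what makes the "infinite processors" thought experiment tempting:

        from itertools import permutations
        from math import factorial

        def brute_force(items, is_solution):
            # n! candidate orderings; each check is independent of the others,
            # so in principle they could all be farmed out to separate processors.
            for candidate in permutations(items):
                if is_solution(candidate):
                    return candidate
            return None

        for n in (5, 10, 15):
            print(n, factorial(n))  # 120, 3628800, 1307674368000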

    Read the article

  • Worst Case number of rotations for BST to AVL algorithm?

    - by spacker_lechuck
    I have a basic algorithm below, and I know that the worst-case input BST is one that has degenerated to a linked list from inserts to only one side. How would I compute the worst-case complexity, in terms of the number of rotations, for this BST-to-AVL conversion algorithm?

        IF tree is right heavy {
            IF tree's right subtree is left heavy {
                Perform Double Left rotation
            } ELSE {
                Perform Single Left rotation
            }
        } ELSE IF tree is left heavy {
            IF tree's left subtree is right heavy {
                Perform Double Right rotation
            } ELSE {
                Perform Single Right rotation
            }
        }
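
    For context, a hypothetical Python sketch (not from the question) of what the single and double rotations do to the node structure:

        class Node:
            def __init__(self, key, left=None, right=None):
                self.key, self.left, self.right = key, left, right

        def rotate_left(root):
            # Promote the right child; the old root becomes its left child.
            pivot = root.right
            root.right = pivot.left
            pivot.left = root
            return pivot  # new subtree root

        def rotate_right(root):
            # Mirror image of rotate_left.
            pivot = root.left
            root.left = pivot.right
            pivot.right = root
            return pivot

        def double_rotate_left(root):
            # Used when the right subtree is left heavy.
            root.right = rotate_right(root.right)
            return rotate_left(root)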

    Read the article

  • How to educate business managers on the complexity of adding new features? [duplicate]

    - by Derrick Miller
    We maintain a web application for a client who demands that new features be added at a breakneck pace. We've done our best to keep up with their demands, and as a result the code base has grown exponentially. There are now so many modules, subsystems, controllers, class libraries, unit tests, APIs, etc. that it's starting to take more time to work through all of the complexity each time we add a new feature. We've also had to pull additional people in on the project to take over things like QA and staging, so the lead developers can focus on developing. Unfortunately, the client is becoming angry that the cost of each new feature is going up. They seem to expect that we can add new features ad infinitum and the cost of each feature will remain linear. I have repeatedly tried to explain to them that it doesn't work that way: the code base expands in a fractal manner as all these features are added. I've explained that the best way to keep the cost down is to be judicious about which new features are really needed. But they either don't understand, or they think I'm bullshitting them. They just sort of roll their eyes and get angry. They're all completely non-technical and have no idea what goes into writing software. Is there a way I can explain this using business language that might help them understand better? Are there any visualizations out there that illustrate the growth of a code base over time? Any other suggestions on dealing with this client?

    Read the article

  • binary search and trie

    - by user121196
    Given a large list of alphabetically sorted words in a file, I need to write a program that, given a word x, determines whether x is in the list. Priorities: 1. speed, 2. memory. I already know I can use (where n is the number of words and m is the average length of a word): 1. a trie, with time O(log(n)), space O(log(n*m)) in the best case and O(n*m) in the worst case; 2. loading the complete list into memory and then binary searching, with time O(log(n)) and space O(n*m). I'm not sure about the complexities for the trie, so please correct me if they are wrong. Also, are there other good approaches?
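
    As an illustration of option 2, a minimal Python sketch that loads the sorted list and answers membership queries with binary search using the standard bisect module (the file name is made up):

        import bisect

        def load_words(path):
            with open(path) as f:
                return [line.strip() for line in f]  # assumed to be already sorted

        def contains(sorted_words, x):
            i = bisect.bisect_left(sorted_words, x)   # O(log n) comparisons
            return i < len(sorted_words) and sorted_words[i] == x

        words = load_words("words.txt")  # hypothetical file name
        print(contains(words, "zebra"))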

    Read the article

  • Implementation of a distance matrix of a binary tree that is given in the adjacency-list representation

    - by Denise Giubilei
    Given this problem, I need an O(n²) implementation of this algorithm: "Let v be an arbitrary leaf in a binary tree T, and w be the only vertex connected to v. If we remove v, we are left with a smaller tree T'. The distance between v and any vertex u in T' is 1 plus the distance between w and u." This is a problem and solution from one of Manber's exercises (Exercise 5.12 from U. Manber, Algorithms: A Creative Approach, Addison-Wesley (1989)). The thing is that I can't handle the adjacency-list representation properly, so that the implementation of this algorithm actually runs in O(n²). Any ideas of how the implementation should look? Thanks.
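
    A minimal Python sketch of one way to realize the leaf-removal idea in O(n^2), under the assumption that the tree is given as an adjacency dict {vertex: [neighbours]} (my interpretation, not Manber's code): peel off a leaf v, solve the smaller tree recursively, then fill v's row and column using its neighbour w.

        def distance_matrix(adj):
            # adj: {vertex: list of neighbouring vertices} describing a tree
            vertices = list(adj)
            if len(vertices) == 1:
                return {vertices[0]: {vertices[0]: 0}}
            # Pick any leaf v and its single neighbour w.
            v = next(u for u in vertices if len(adj[u]) == 1)
            w = adj[v][0]
            # Remove v, solve the smaller tree, then extend the answer.
            smaller = {u: [x for x in nbrs if x != v]
                       for u, nbrs in adj.items() if u != v}
            dist = distance_matrix(smaller)
            dist[v] = {v: 0}
            for u in smaller:                 # O(n) work per removed leaf
                dist[v][u] = dist[w][u] + 1
                dist[u][v] = dist[v][u]
            return dist

        print(distance_matrix({1: [2], 2: [1, 3], 3: [2]}))

    Each removal does O(n) work (finding a leaf, copying the adjacency lists, filling one row and column), and there are n removals, giving O(n^2) overall; for very deep trees an iterative peeling order would avoid Python's recursion limit.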

    Read the article

  • How to get it working in O(n)?

    - by evermean
    I came across an interview task/question that really got me thinking... so here it goes: You have an array A[N] of N numbers. You have to compose an array Output[N] such that Output[i] is equal to the product of all the elements of A[N] except A[i]. For example, Output[0] is the product of A[1] to A[N-1], and Output[1] is the product of A[0] and A[2] to A[N-1]. Solve it without the division operator and in O(n). I really tried to come up with a solution, but I always end up with a complexity of O(n^2). Perhaps there is someone smarter than me who can tell me an algorithm that works in O(n), or at least give me a hint...
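
    For what it's worth, a hedged sketch of the usual prefix/suffix-product trick (not taken from the question itself): Output[i] is the product of everything to the left of i times the product of everything to the right of i, and both can be accumulated in single passes.

        def products_except_self(a):
            n = len(a)
            out = [1] * n
            left = 1
            for i in range(n):               # left-to-right pass: prefix products
                out[i] = left
                left *= a[i]
            right = 1
            for i in range(n - 1, -1, -1):   # right-to-left pass: suffix products
                out[i] *= right
                right *= a[i]
            return out

        print(products_except_self([1, 2, 3, 4]))  # [24, 12, 8, 6]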

    Read the article

  • fast similarity detection

    - by reinierpost
    I have a large collection of objects and I need to figure out the similarities between them. To be exact: given two objects I can compute their dissimilarity as a number, a metric; higher values mean less similarity, and 0 means the objects have identical contents. The cost of computing this number is proportional to the size of the smaller object (each object has a given size). I need the ability to quickly find, given an object, the set of objects similar to it. To be exact: I need to produce a data structure that maps any object o to the set of objects no more dissimilar to o than d, for some dissimilarity value d, such that listing the objects in the set takes no more time than if they were in an array or linked list (and perhaps they actually are). Typically, the set will be very much smaller than the total number of objects, so it is really worthwhile to perform this computation. It's good enough if the data structure assumes a fixed d, but if it works for an arbitrary d, even better. Have you seen this problem before, or something similar to it? What is a good solution? To be exact: a straightforward solution involves computing the dissimilarities between all pairs of objects, but this is slow, O(n^2) where n is the number of objects. Is there a general solution with lower complexity?
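
    One family of structures designed for exactly this kind of metric range query is the BK-tree; below is a minimal hypothetical Python sketch, assuming the dissimilarity is integer-valued and obeys the triangle inequality (class and method names are made up):

        class BKTree:
            def __init__(self, dist):
                self.dist = dist      # integer-valued metric supplied by the caller
                self.root = None      # each node is [obj, {edge_distance: child_node}]

            def add(self, obj):
                if self.root is None:
                    self.root = [obj, {}]
                    return
                node = self.root
                while True:
                    d = self.dist(obj, node[0])
                    child = node[1].get(d)
                    if child is None:
                        node[1][d] = [obj, {}]
                        return
                    node = child

            def query(self, obj, threshold):
                # Return all stored objects within `threshold` of `obj`.
                results, stack = [], [self.root] if self.root else []
                while stack:
                    node = stack.pop()
                    d = self.dist(obj, node[0])
                    if d <= threshold:
                        results.append(node[0])
                    # Triangle inequality: only children whose edge distance k
                    # satisfies |k - d| <= threshold can contain matches.
                    for k, child in node[1].items():
                        if abs(k - d) <= threshold:
                            stack.append(child)
                return results

    Queries prune whole subtrees via the triangle inequality, so they typically touch far fewer than n objects, although the worst case is still O(n) distance computations.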

    Read the article

  • Why is my quick sort so slow?

    - by user513075
    Hello, I am practicing writing sorting algorithms as part of some interview preparation, and I am wondering if anybody can help me spot why this quick sort is not very fast? It appears to have the correct runtime complexity, but it is slower than my merge sort by a constant factor of about 2. I would also appreciate any comments that would improve my code that don't necessarily answer the question. Thanks a lot for your help! Please don't hesitate to let me know if I have made any etiquette mistakes. This is my first question here.

        private class QuickSort implements Sort {

            @Override
            public int[] sortItems(int[] ts) {
                List<Integer> toSort = new ArrayList<Integer>();
                for (int i : ts) {
                    toSort.add(i);
                }
                toSort = partition(toSort);
                int[] ret = new int[ts.length];
                for (int i = 0; i < toSort.size(); i++) {
                    ret[i] = toSort.get(i);
                }
                return ret;
            }

            private List<Integer> partition(List<Integer> toSort) {
                if (toSort.size() <= 1) return toSort;
                int pivotIndex = myRandom.nextInt(toSort.size());
                Integer pivot = toSort.get(pivotIndex);
                toSort.remove(pivotIndex);
                List<Integer> left = new ArrayList<Integer>();
                List<Integer> right = new ArrayList<Integer>();
                for (int i : toSort) {
                    if (i > pivot) right.add(i);
                    else left.add(i);
                }
                left = partition(left);
                right = partition(right);
                left.add(pivot);
                left.addAll(right);
                return left;
            }
        }

    Read the article

  • How can I find the common ancestor of two nodes in a binary tree?

    - by Siddhant
    The binary tree here is not a binary search tree; it's just a binary tree. The structure could be taken as: struct node { int data; struct node *left; struct node *right; }; The best solution I could work out with a friend was something of this sort. Consider this binary tree (from http://lcm.csa.iisc.ernet.in/dsa/node87.html): the inorder traversal yields 8, 4, 9, 2, 5, 1, 6, 3, 7, and the postorder traversal yields 8, 9, 4, 5, 2, 6, 7, 3, 1. So, for instance, if we want to find the common ancestor of nodes 8 and 5, we make a list of all the nodes which are between 8 and 5 in the inorder traversal, which in this case happens to be [4, 9, 2]. Then we check which node in this list appears last in the postorder traversal, which is 2. Hence the common ancestor of 8 and 5 is 2. The complexity of this algorithm, I believe, is O(n) (O(n) for the inorder/postorder traversals, the remaining steps again being O(n) since they are nothing more than simple iterations over arrays). But there is a strong chance that this is wrong. :-) It's a very crude approach, and I'm not sure if it breaks down for some case. Is there any other (possibly more optimal) solution to this problem?
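
    A common alternative is the single-pass recursive approach, sketched here in Python for illustration (assuming both target values are present in the tree): a node is the lowest common ancestor when the two targets fall into different subtrees beneath it. It is still O(n), but needs no traversal lists.

        class Node:
            def __init__(self, data, left=None, right=None):
                self.data, self.left, self.right = data, left, right

        def lca(root, a, b):
            # Lowest common ancestor of values a and b, assuming both are present.
            if root is None or root.data in (a, b):
                return root
            left = lca(root.left, a, b)
            right = lca(root.right, a, b)
            if left and right:       # a and b were found in different subtrees
                return root
            return left or right     # both on one side (or not found here)

        # The tree from the question's linked figure:
        root = Node(1, Node(2, Node(4, Node(8), Node(9)), Node(5)),
                       Node(3, Node(6), Node(7)))
        print(lca(root, 8, 5).data)  # 2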

    Read the article

  • How do I avoid the complexity concerns of frameworks while keeping my team marketable?

    - by Desolate Planet
    When deciding with my colleagues how to design a software project, most suggestions tend to be for using specific frameworks "because it's popular in the job market" or "that's the framework that gets recruiters on the phone," and never what I'm looking for, which is "because it's a good fit for the project, as it makes the system more adaptive to future changes and makes life easier for developers." I didn't start looking at projects in this way until I started reading up on domain-driven design. I've found that the actual domain is hidden deep under the frameworks used, and it's hard to learn the business processes that have been implemented by the software product. Is there a way to marry the two competing goals: getting exposure as a development team while still being able to avoid complexity? Are there frameworks that offer that compromise, or are there other solutions out there?

    Read the article

  • When decomposing a large function, how can I avoid the complexity from the extra subfunctions?

    - by missingno
    Say I have a large function like the following:

        function do_lots_of_stuff(){
            { //subpart 1
                ...
            }
            ...
            { //subpart N
                ...
            }
        }

    A common pattern is to decompose it into subfunctions:

        function do_lots_of_stuff(){
            subpart_1(...)
            subpart_2(...)
            ...
            subpart_N(...)
        }

    I usually find that decomposition has two main advantages: 1. the decomposed function becomes much smaller, which can help people read it without getting lost in the details; 2. parameters have to be explicitly passed to the underlying subfunctions, instead of being implicitly available by just being in scope, which can help readability and modularity in some situations. However, I also find that decomposition has some disadvantages: 1. there are no guarantees that the subfunctions "belong" to do_lots_of_stuff, so there is nothing stopping someone from accidentally calling them from the wrong place; 2. a module's complexity grows quadratically with the number of functions we add to it (there are more possible ways for things to call each other). Therefore: are there useful conventions or coding styles that help me balance the pros and cons of function decomposition, or should I just use an editor with code folding and call it a day? EDIT: This problem also applies to functional code (although in a less pressing manner). For example, in a functional setting the subparts would return values that are combined at the end, and the decomposition problem of having lots of subfunctions able to use each other is still present. We can't always assume that the problem domain can be modeled with just a few small, simple types and a few highly orthogonal functions. There will always be complicated algorithms or long lists of business rules that we still want to be able to deal with correctly.

        function do_lots_of_stuff(){
            p1 = subpart_1()
            p2 = subpart_2()
            pN = subpart_N()
            return assembleStuff(p1, p2, ..., pN)
        }

    Read the article

  • What is the difference between these two nloglog(n) sorting algorithms? (Andersson et al., 1995 vs. Han, 2004)

    - by Yktula
    Swanepoel's comment here led me to this paper. Then, searching for an implementation in C, I came across this, which referenced another paper on an algorithm described here. Both papers describe integer sorting algorithms that run in O(n log log n) time. What is the difference between the two? Have there been any more recent findings about this topic? Andersson et al., 1995; Han, 2004.

    Read the article

  • Big O / theta notation

    - by niggersak
    Can someone please help with the solution? Use big-O notation to classify the traditional grade-school algorithms for addition and multiplication. That is, if asked to add two numbers each having N digits, how many individual additions must be performed? If asked to multiply two N-digit numbers, how many individual multiplications are required? Suppose f is a function that returns the result of reversing the string of symbols given as its input, and g is a function that returns the concatenation of the two strings given as its input. If x is the string hrwa, what is returned by g(f(x), x)? Explain your answer; don't just provide the result!
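
    As a sketch of how the counting goes (an illustration, not part of the question): grade-school addition touches each digit column once, while grade-school multiplication multiplies every digit of one number by every digit of the other. A small Python simulation that counts the elementary operations:

        def school_add(a, b):
            # Grade-school addition on decimal strings; counts column additions.
            n = max(len(a), len(b))
            a, b = a.zfill(n), b.zfill(n)
            carry, out, ops = 0, [], 0
            for da, db in zip(reversed(a), reversed(b)):
                s = int(da) + int(db) + carry   # one single-digit addition per column
                ops += 1
                out.append(str(s % 10))
                carry = s // 10
            if carry:
                out.append(str(carry))
            return "".join(reversed(out)), ops   # ops == n, i.e. O(N)

        def school_multiply_ops(a, b):
            # Every digit of a is multiplied by every digit of b.
            return len(a) * len(b)               # N * N = O(N^2) for two N-digit numbers

        print(school_add("1234", "5678"))           # ('6912', 4)
        print(school_multiply_ops("1234", "5678"))  # 16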

    Read the article

  • Best jQuery/Prototype book for complex ajax?

    - by Burton Kent
    I've been working on a complex app with one main dashboard. I don't particularly like the design because it tries to do too much on one page, so the lead developer thought it would be a good idea to use Ajax, because the page is so big; refreshing part of it is far faster than loading it again. The problem is there are several ways the data can be used: adding items, editing rows, performing actions on selected rows (selected using a checkbox), and changing single items (like location or phone). My problem is writing generalizable Ajax code that can operate on the data in a div, using class names to assemble the proper information for the Ajax call. I did pretty well, but I can't help wanting to see if there's a better way to do it.

    Read the article

  • Amazing families of algorithms over implicit graphs

    - by Diego de Estrada
    Dynamic programming is, almost by definition, the act of finding a shortest/longest path on an implicit DAG; every DP algorithm essentially does this. A holographic algorithm can be loosely described as something that counts perfect matchings in implicit planar graphs. So, my question is: are there any other families of algorithms that use well-known algorithms over implicit graphs to achieve a considerable speedup?

    Read the article

  • What is an XYZ-complete problem?

    - by TheMachineCharmer
    EDIT: Diagram: http://www.cs.umass.edu/~immerman/complexity_theory.html There must be some meaning to the word "complete"; it's used every now and then. Look at the diagram. I tried reading previous posts about NP. My question is: what does the word "complete" mean? Why is it there? What is its significance? N (non-deterministic) makes sense, P (polynomial) makes sense, but the "complete" part is still a mystery to me.

    Read the article

  • How to calculate order (big O) for more complex algorithms (i.e. quicksort)

    - by bangoker
    I know there are quite a bunch of questions about big O notation; I have already checked "Plain English explanation of Big O", "Big O, how do you calculate/approximate it?", and "Big O Notation Homework - Code Fragment Algorithm Analysis?", to name a few. I know by "intuition" how to calculate it for n, n^2, n! and so on; however, I am completely lost on how to calculate it for algorithms that are log n, n log n, n log log n and so on. What I mean is, I know that quicksort is n log n (on average)... but why? Same thing for merge sort, comb sort, etc. Could anybody explain to me, in a not-too-math-y way, how you calculate this? The main reason is that I'm about to have a big interview and I'm pretty sure they'll ask this kind of stuff. I have researched for a few days now, and everybody seems to have either an explanation of why bubble sort is n^2 or an (for me) unreadable explanation à la Wikipedia. Thanks!
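
    A sketch of the usual reasoning, added for illustration: the log factors come from repeatedly halving the input. Merge sort, and quicksort on average (a random pivot splits the array roughly in half), satisfy a recurrence like

        T(n) = 2*T(n/2) + c*n          (two recursive halves plus linear merge/partition work)
             = 4*T(n/4) + 2*c*n
             = 8*T(n/8) + 3*c*n
             ...
             = 2^k * T(n/2^k) + k*c*n

    After k = log2(n) halvings the subproblems reach size 1, so the total work is roughly c*n*log2(n), i.e. O(n log n). The same picture with only constant work per level, as in binary search (T(n) = T(n/2) + c), collapses to O(log n).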

    Read the article

  • The Immerman-Szelepcsenyi Theorem

    - by Daniel Lorch
    In the Immerman-Szelepcsenyi theorem, two algorithms are specified that use non-determinism. There is a rather lengthy algorithm using "inductive counting", which determines the number of reachable configurations for a given non-deterministic Turing machine. The algorithm looks like this:

        Let m_{i+1} = 0
        For all configurations C
            Let b = 0, r = 0
            For all configurations D
                Guess a path from I to D in at most i steps
                If found
                    Let r = r + 1
                    If D = C or D goes to C in 1 step
                        Let b = 1
            If r < m_i halt and reject
            Let m_{i+1} = m_{i+1} + b

    I is the starting configuration. m_i is the number of configurations reachable from the starting configuration in i steps. This algorithm only calculates the "next step", i.e. m_{i+1} from m_i. This seems pretty reasonable, but since we have nondeterminism, why don't we just write:

        Let m_i = 0
        For all configurations C
            Guess a path from I to C in at most i steps
            If found
                m_i = m_i + 1

    What is wrong with this algorithm? 1. I am using nondeterminism to guess a path from I to C, and I verify reachability. 2. I am iterating through the list of ALL configurations, so I am sure not to miss any configuration. 3. I respect the space bounds. 4. I can generate a certificate (the list of reachable configurations). I believe I have a misunderstanding of the "power" of non-determinism, but I can't figure out where to look next. I have been stuck on this for quite a while and I would really appreciate any help.

    Read the article

< Previous Page | 2 3 4 5 6 7 8 9 10 11 12 13  | Next Page >