Search Results

Search found 1375 results on 55 pages for 'asymptotic complexity'.

Page 5/55

  • Time complexity of a recursive algorithm

    - by Passonate Learner
    How can I calculate the time complexity of a recursive algorithm?

    pow1(x, n)
        if (n == 0)
            return 1
        else
            return x * pow1(x, n-1)

    pow2(x, n)
        if (n == 0)
            return 1
        else if (n is odd integer)
            p = pow2(x, (n-1)/2)
            return x * p * p
        else if (n is even integer)
            p = pow2(x, n/2)
            return p * p
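
    For reference, a runnable Python transcription with the standard recurrences noted in comments (the annotations are mine, not part of the question):

        def pow1(x, n):
            # T(n) = T(n-1) + O(1)  =>  O(n) multiplications
            if n == 0:
                return 1
            return x * pow1(x, n - 1)

        def pow2(x, n):
            # T(n) = T(n/2) + O(1)  =>  O(log n) multiplications
            if n == 0:
                return 1
            if n % 2 == 1:
                p = pow2(x, (n - 1) // 2)
                return x * p * p
            p = pow2(x, n // 2)
            return p * p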

    Read the article

  • Add to SortedSet<T> and its complexity

    - by Andrei Taptunov
    MSDN states the following about the SortedSet(T).Add method: "If Count is less than the capacity of the internal array, this method is an O(1) operation." Could someone please explain how that can be? When adding a new value, we need to find the correct place for it (comparing it with other values), and the internal implementation looks like a red-black tree, which has O(log N) insertion complexity.
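
    A hedged observation (mine, not from the thread): the quoted sentence is identical to the wording in the documentation of array-backed collections such as List<T>.Add, so it may simply be boilerplate carried over in error. Insertion into a red-black tree requires an O(log N) search for the insertion point, so Add on a tree-based set cannot in general be O(1).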

    Read the article

  • How to measure the complexity of a program

    - by gcc
    How does a compiler determine that there is a run-time error? Does it run the code and then decide whether it is executable or not? Are there any programs capable of determining the complexity of my executable code? And is there any code for measuring the time from when a program starts executing until it finishes?
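
    On the last point, a minimal wall-clock timing sketch in Python (the ./myprogram path is a placeholder, not from the question):

        import subprocess
        import time

        start = time.perf_counter()
        subprocess.run(["./myprogram"])  # replace with the executable to measure
        elapsed = time.perf_counter() - start
        print(f"ran for {elapsed:.3f} seconds")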

    Read the article

  • How meaningful is the Big-O time complexity of an algorithm?

    - by james creasy
    Programmers often talk about the time complexity of an algorithm, e.g. O(log n) or O(n^2). Time-complexity classifications describe behavior as the input size goes to infinity, yet no real computation ever sees infinite input. Put another way, the classification of an algorithm is based on a situation the algorithm will never be in: n = infinity. Also, consider that a polynomial-time algorithm with a huge exponent is about as useless in practice as an exponential-time algorithm with a tiny base (e.g., 1.00000001^n) is useful. Given this, how much can I rely on Big-O time complexity when choosing between algorithms?
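
    To make the tiny-base example concrete (a back-of-the-envelope note, not from the question): 1.00000001^n = e^(n * ln 1.00000001) ~ e^(n * 10^-8), so this "exponential" only doubles roughly every 69 million steps and does not overtake even n^2 until n is in the billions.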

    Read the article

  • Worse is better. Is there an example?

    - by J.F. Sebastian
    Is there a widely used algorithm whose time complexity is worse than that of another known algorithm, yet which is the better choice in all practical situations (worse complexity but better otherwise)? An acceptable answer might take the form: there are algorithms A and B with O(N**2) and O(N) time complexity respectively, but B has such a big constant that it has no advantage over A for inputs smaller than the number of atoms in the Universe.

    Example highlights from the answers:

    - The simplex algorithm (worst-case exponential time) vs. known polynomial-time algorithms for convex optimization problems.
    - A naive median-of-medians algorithm (worst-case O(N**2)) vs. the known O(N) algorithm.
    - Backtracking regex engines (worst-case exponential) vs. O(N) Thompson-NFA-based engines.

    All of these exploit the difference between worst-case and average-case scenarios. Are there examples that do not rely on that distinction? Related: The Rise of "Worse is Better". (For the purpose of this question, the phrase "worse is better" is used in a narrower, algorithmic time-complexity sense than in that article.) From Python's design philosophy: the ABC group strove for perfection; for example, they used tree-based data-structure algorithms that were proven optimal for asymptotically large collections but were not so great for small collections. That example would be an answer if no computer could store such large collections (in other words, "large" is not large enough in this case). The Coppersmith–Winograd algorithm for square matrix multiplication is a good example: it is the fastest known (2008) but is inferior to asymptotically worse algorithms in practice. From the Wikipedia article: "It is not used in practice because it only provides an advantage for matrices so large that they cannot be processed by modern hardware (Robinson 2005)." Any others?
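
    One family of examples that does not hinge on the worst-vs-average distinction is small-input constant factors: production sorts (Timsort, introsort) switch to O(n^2) insertion sort below a small cutoff because its constants beat O(n log n) merging there. A rough benchmark sketch (illustrative only; exact numbers vary by machine):

        import timeit
        import random

        def insertion_sort(a):
            # O(n^2) worst case, but very low constant factors
            a = a[:]
            for i in range(1, len(a)):
                key = a[i]
                j = i - 1
                while j >= 0 and a[j] > key:
                    a[j + 1] = a[j]
                    j -= 1
                a[j + 1] = key
            return a

        def merge_sort(a):
            # O(n log n), but with allocation and call overhead
            if len(a) <= 1:
                return a
            mid = len(a) // 2
            left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
            out, i, j = [], 0, 0
            while i < len(left) and j < len(right):
                if left[i] <= right[j]:
                    out.append(left[i]); i += 1
                else:
                    out.append(right[j]); j += 1
            return out + left[i:] + right[j:]

        data = [random.random() for _ in range(8)]
        print(timeit.timeit(lambda: insertion_sort(data), number=100000))
        print(timeit.timeit(lambda: merge_sort(data), number=100000))

    On tiny inputs like this, the asymptotically worse insertion sort reliably wins, which is precisely why hybrid sorts delegate to it.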

    Read the article

  • Help with password complexity regex

    - by Alex
    I'm using the following regex to validate password complexity:

        /^.*(?=.{6,12})(?=.*[0-9]{2})(?=.*[A-Z]{2})(?=.*[a-z]{2}).*$/

    In a nutshell: 2 lowercase letters, 2 uppercase letters, 2 numbers, minimum length 6, maximum length 12. It works perfectly except for the maximum length when I use a minimum length as well. For example, this correctly requires a minimum length of 6:

        /^.*(?=.{6,})(?=.*[0-9]{2})(?=.*[A-Z]{2})(?=.*[a-z]{2}).*$/

    and this correctly requires a maximum length of 12:

        /^.*(?=.{,12})(?=.*[0-9]{2})(?=.*[A-Z]{2})(?=.*[a-z]{2}).*$/

    However, when I pair them together as in the first example, it just doesn't work! What gives? Thanks!
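
    A likely culprit (my reading, not from the original thread): a lookahead like (?=.{6,12}) only asserts that at least 6 characters follow the current position; nothing caps the total length, so it can only ever enforce the minimum. Anchoring the length test to the end of the string fixes that. A quick check in Python:

        import re

        # Hedged fix: the $ inside the lookahead bounds the whole string.
        pattern = re.compile(r'^(?=.{6,12}$)(?=.*[0-9]{2})(?=.*[A-Z]{2})(?=.*[a-z]{2}).*$')

        print(bool(pattern.match("aaBB12")))          # True: 6 chars, aa/BB/12
        print(bool(pattern.match("aaBB12aaBB12aa")))  # False: 14 chars, too long
        print(bool(pattern.match("aaBB1")))           # False: only 5 chars

    Note also that [0-9]{2} requires two adjacent digits; if any two digits anywhere are intended, (?=(?:.*[0-9]){2}) is the usual form (and likewise for the letter classes).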

    Read the article

  • How to check all the nodes in a tree view with minimum complexity

    - by Vinni
    I need to check/select all the nodes in a tree view with minimum complexity. My tree view has 3 levels and many nodes in it. Below is my code:

        <asp:TreeView ID="TreeView1" runat="server" DataSourceID="XmlDataSource1"
                      ShowCheckBoxes="All" ShowExpandCollapse="true">
            <DataBindings>
                <asp:TreeNodeBinding DataMember="Category" TextField="Name" ValueField="Value" />
                <asp:TreeNodeBinding DataMember="LeafCategory" TextField="Name" ValueField="Value" />
                <asp:TreeNodeBinding DataMember="ChildCategory" TextField="Name" ValueField="Value" />
                <asp:TreeNodeBinding DataMember="SubCategory" TextField="Name" ValueField="Value" />
                <asp:TreeNodeBinding DataMember="Categories" TextField="Name" ValueField="Value" />
            </DataBindings>
        </asp:TreeView>

    Read the article

  • Complexity of algorithms

    - by davit-datuashvili
    I have a question: what is the complexity of this algorithm?

        public class smax {
            public static void main(String[] args) {
                int b[] = new int[11];
                int a[] = new int[]{4, 9, 2, 6, 8, 7, 5};
                // mark each value of a[] in the flag array b[]
                // (reconstructed; the original listing was garbled)
                for (int i = 0; i < a.length; i++) {
                    b[a[i]] = 1;
                }
                // print the flagged indices in increasing order:
                // one O(n) marking pass plus one O(b.length) scan
                for (int i = 0; i < b.length; i++) {
                    if (b[i] == 1) {
                        System.out.print(i + " ");
                    }
                }
                // result = 2 4 5 6 7 8 9
            }
        }

    Read the article

  • Throwing cats out of windows

    - by AndrewF
    Imagine you're in a tall building with a cat. The cat can survive a fall from a low story's window but will die if thrown from a high floor. How can you figure out the longest drop the cat can survive, using the fewest attempts?

    Obviously, if you only have one cat, you can only search linearly. First throw the cat from the first floor. If it survives, throw it from the second. Eventually, after being thrown from floor f, the cat will die, and you then know that floor f-1 was the highest safe floor.

    But what if you have more than one cat? You can now try some sort of logarithmic search. Say the building has 100 floors and you have two identical cats. If you throw the first cat out of the 50th-floor window and it dies, you only have to search 50 floors linearly. You can do even better by choosing a lower floor for your first attempt. Say you tackle the problem 20 floors at a time and the first fatal floor is #50: your first cat will survive falls from floors 20 and 40 before dying from floor 60, and you then check floors 41 through 49 individually. That's a total of 12 attempts, much better than the 50 you might need with naive binary elimination.

    In general, what's the best strategy, and what is its worst-case complexity, for an n-story building with 2 cats? What about n floors and m cats? Assume that all cats are equivalent: they will all survive or die from a fall from a given window. Also, every attempt is independent: if a cat survives a fall, it is completely unharmed.

    This isn't homework, although I may have solved it for a school assignment once. It's just a whimsical problem that popped into my head today, and I don't remember the solution. Bonus points if anyone knows the name of this problem or of the solution algorithm.
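
    This is usually known as the egg-dropping puzzle (commonly phrased with eggs rather than cats). A dynamic-programming sketch for the general n-floor, m-cat case; the formulation is the standard one, the code itself is mine:

        from functools import lru_cache

        @lru_cache(maxsize=None)
        def attempts(floors, cats):
            """Minimum worst-case number of throws needed to find the
            highest safe floor among `floors` floors with `cats` cats."""
            if floors == 0:
                return 0
            if cats == 1:
                return floors  # only a bottom-up linear search is safe
            best = floors
            for x in range(1, floors + 1):
                # throw from floor x: either the cat dies (search the x-1
                # floors below with one cat fewer) or it survives (search
                # the floors-x floors above with all cats)
                worst = 1 + max(attempts(x - 1, cats - 1),
                                attempts(floors - x, cats))
                best = min(best, worst)
            return best

        print(attempts(100, 2))  # 14

    For two cats the answer follows the triangular numbers: k throws cover at most k(k+1)/2 floors, and 14 * 15 / 2 = 105 >= 100, so 14 attempts suffice for a 100-floor building.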

    Read the article

  • Proving that a function f(n) belongs to Big-Theta(g(n))

    - by PLS
    It's an exercise that asks me to indicate the class Big-Theta(g(n)) the function belongs to and to prove the assertion. In this case f(n) = (n^2+1)^10. By definition, f(n) ∈ Big-Theta(g(n)) iff there exist positive constants c1 and c2 (and some n0) such that c1*g(n) <= f(n) <= c2*g(n) for all n >= n0. I know that for this specific f(n) the class is Big-Theta(n^20), but I don't know how to prove it properly. I guess I need to manipulate this inequality, but I don't know how.
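
    A standard squeeze does it (a sketch): for all n >= 1,

        n^20 = (n^2)^10 <= (n^2 + 1)^10 <= (n^2 + n^2)^10 = 2^10 * n^20

    using 1 <= n^2 for n >= 1 in the upper bound. So the definition is satisfied with c1 = 1, c2 = 2^10 = 1024, and n0 = 1, giving (n^2+1)^10 ∈ Big-Theta(n^20).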

    Read the article

  • What is order notation f(n) = O(g(n))?

    - by Lopa
    Two questions.
    Question 1: under what circumstances would this [O(f(n)) = O(k.f(n))] be the most appropriate form of time-complexity analysis?
    Question 2: working from the mathematical definition of O notation, show that O(f(n)) = O(k.f(n)) for a positive constant k.
    My view: for the first one, I think it concerns the average-case and worst-case forms of time-complexity analysis. Am I right? And what else should I write there? For the second one, I think we need to work from the mathematical definition, so is the answer something like: multiplication by a constant just corresponds to a readjustment of the value of the arbitrary constant in the definition of O?
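
    For question 2, the usual two-line argument (a sketch spelling out the asker's idea): if g(n) ∈ O(k.f(n)), there are constants c > 0 and n0 with g(n) <= c.k.f(n) for all n >= n0; setting c' = c.k gives g(n) <= c'.f(n), so g(n) ∈ O(f(n)). Conversely, if g(n) <= c.f(n), then g(n) = (c/k).k.f(n) at most, so g(n) ∈ O(k.f(n)). Both directions use only k > 0, so the two classes coincide.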

    Read the article

  • How is schoolbook long division an O(n^2) algorithm?

    - by eSKay
    Premise: this Wikipedia page suggests that the computational complexity of schoolbook long division is O(n^2). Deduction: instead of taking two n-digit numbers, if I take one n-digit number and one m-digit number, then the complexity should be O(n*m). Contradiction: suppose you divide 100000000 (n digits) by 1000 (m digits); you get 100000, which takes six steps to arrive at. Now divide 100000000 (n digits) by 10000 (m digits); you get 10000, which takes only five steps. Conclusion: so it seems the order of computation should be something like O(n/m). Question: who is wrong, me or Wikipedia, and where?
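
    A reconciliation sketch (my arithmetic, not from the thread): schoolbook division produces about n - m + 1 quotient digits, which matches the step counts above (9 - 4 + 1 = 6 and 9 - 5 + 1 = 5), but each step is a multiply-and-subtract against the m-digit divisor, costing O(m) work. The total is therefore O((n - m + 1) * m), which is O(n*m) in general and O(n^2) when both operands have n digits. Counting steps alone, while ignoring the O(m) cost of each step, is what suggests the misleading O(n/m).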

    Read the article

  • Accidental Complexity in OpenSSL HMAC functions

    - by Hassan Syed
    SSL Documentation Analysis. This question pertains to the usage of the HMAC routines in OpenSSL. The OpenSSL documentation is a tad weak in certain areas, and profiling has revealed that 40% of my library's runtime goes to creating and tearing down HMAC_CTXs behind the scenes when using:

        unsigned char *HMAC(const EVP_MD *evp_md, const void *key, int key_len,
                            const unsigned char *d, int n,
                            unsigned char *md, unsigned int *md_len);

    There are also two additional functions to create and destroy an HMAC_CTX explicitly: HMAC_CTX_init() initialises an HMAC_CTX before first use (it must be called), and HMAC_CTX_cleanup() erases the key and other data from the HMAC_CTX and releases any associated resources (it must be called when an HMAC_CTX is no longer required). In the man page these functions are prefixed with the remark: "The following functions may be used if the message is not completely stored in memory." My data fits entirely in memory, so I chose the one-shot HMAC function, whose signature is shown above. The context, as described by the man page, is used via two further functions: HMAC_Update(), which can be called repeatedly with chunks of the message to be authenticated (len bytes at data), and HMAC_Final(), which places the message authentication code in md (which must have space for the hash function output).

    The scope of the application: it generates an authenticated (the HMAC is also used as a nonce), CBC-BF-encrypted protocol-buffer string. The code will be interfaced with various web servers and frameworks: Windows/Linux as the OS; nginx, Apache, and IIS as web servers; and Python, .NET, and C++ web-server filters. That should clarify that the library needs to be thread-safe and potentially have resumable processing state, i.e. lightweight threads sharing an OS thread (which might rule out thread-local memory).

    The question: how do I get rid of the 40% overhead on each invocation in a way that is (1) thread-safe and (2) resumable? (2) is optional, since I have all of the source data present in one go and can make sure a digest is created in place without relinquishing control of the thread mid-digest. So (1) can probably be done using thread-local memory, but how do I reuse the CTXs? Does the HMAC_Final() call make the CTX reusable? (2), optionally: in this case I would have to create a pool of CTXs. (3) How does the HMAC function do this internally? Does it create a CTX in the scope of the function call and destroy it? Pseudocode and commentary would be useful.
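
    On (3) and context reuse, a hedged note based on the OpenSSL API of that era rather than on this thread: the one-shot HMAC() does set up and tear down a context internally on every call, which is consistent with the profiling figure. The documented route to reuse is HMAC_Init_ex(): calling HMAC_Init_ex(ctx, NULL, 0, NULL, NULL) after HMAC_Final() re-initializes the context with the previously set key and digest, so a per-thread context can be recycled across messages without repeating the key setup.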

    Read the article

  • Ruby on Rails: reducing complexity of parameters in a RESTful HTTP POST request (multi-model)

    - by randombits
    I'm using cURL to test a RESTful HTTP web service. The problem is I'm normally submitting a bunch of values like this:

        curl -d "firstname=bob&lastname=smith&age=25&from=kansas&someothermodelattr=val" \
             -H "Content-Type: application/x-www-form-urlencoded" http://mysite/people.xml -i

    My controller will then have code like this:

        unless params[:firstname].nil?
        end
        unless params[:lastname].nil?
        end
        # finally
        @person = People.new(params[:firstname], params[:lastname], params[:age], params[:from])

    What's the best way to simplify this? My Person model has all the validations it needs. Is there a way (assuming the request has multi-model parameters) that I can just do

        @person = People.new(params[:person])

    and have the initializer take care of the rest?
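
    For what it's worth, the conventional Rails shape of this (a sketch, not from the thread): Rack parses bracketed form keys into a nested hash, so posting person[firstname]=bob&person[lastname]=smith makes params[:person] a hash of attributes, which ActiveRecord's initializer accepts directly, e.g. Person.new(params[:person]). The curl call would then be:

        curl -d "person[firstname]=bob&person[lastname]=smith&person[age]=25&person[from]=kansas" \
             -H "Content-Type: application/x-www-form-urlencoded" http://mysite/people.xml -i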

    Read the article

  • C# object initializer complexity: best practice

    - by Andrew Florko
    I was really excited when object initializers appeared in C#.

        MyClass a = new MyClass();
        a.Field1 = Value1;
        a.Field2 = Value2;

    can be rewritten more concisely:

        MyClass a = new MyClass { Field1 = Value1, Field2 = Value2 };

    Object-initializer code is more obvious, but when the number of properties reaches a dozen and some of the assignments deal with nullable values, it's hard to debug where the "null reference" error is: Visual Studio shows the whole object initializer as the error point. Nowadays I use object initializers only for straightforward, error-free assignments. How do you use object initializers for complex assignments, or is it bad practice to use a dozen assignments at all? Thank you in advance!

    Read the article

  • Reusability, testability, code-complexity reduction, and showing-off-ability: their importance in programming

    - by Andrew Florko
    There are lots of programming and architecture patterns. Patterns make code cleaner, more reusable, more testable and, last but not least, let their follower feel like a really cool developer. How do you rank these considerations? What affects you most when you decide to apply a pattern? I wonder how often code reusability (especially for the MVP and MVC patterns) has actually mattered. For example, a DAL library is often shared between projects (it's reusable), but how often are controllers/views (abstracted via interfaces) reused?

    Read the article

  • Drupal and Back-End Complexity

    - by Brian
    Currently I am working on a school website, and we are still deciding on a framework (we know we're not using Joomla! or hand-coding). Drupal came up as a viable choice, and it is currently my best bet for the site. However, I have an issue with CMSs in general. I would like to develop a fairly complicated, custom-built back-end application through which teachers interact with individual students, including shared/custom calendars, announcement privileges, and so on. I have some expertise with HTML, CSS, PHP, and MySQL, and I could wrap my head around some JavaScript and AJAX if need be. Would such a complicated application work with Drupal, in the sense that I could tailor it specifically to my purposes?

    Read the article

  • Is it a good idea to use a formula to balance a game's complexity, in order to keep players in constant flow?

    - by user1107412
    I have read a lot about flow theory and its applications to video games, and one idea has stuck in my mind. If weight values are applied to different parameters of a game level (i.e. the size of the level, the number of enemies, their overall strength, the variance in their behavior, etc.), then it should be technically possible to compute an overall score for each level in the game. If a constant ratio of complexity increase were empirically defined, for instance 1.3333 or, say, the golden ratio, would it be a good idea to arrange the levels in an order where overall complexity increases by roughly that factor from one level to the next? Has anybody tried this?
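
    A minimal sketch of the scoring mechanism described above (every weight, parameter name, and number here is made up for illustration):

        # Hypothetical level parameters and weights -- illustration only.
        WEIGHTS = {"size": 0.3, "enemies": 0.4, "strength": 0.2, "variance": 0.1}

        def score(level):
            """Weighted sum of a level's normalized difficulty parameters."""
            return sum(WEIGHTS[k] * level[k] for k in WEIGHTS)

        levels = [
            {"name": "A", "size": 0.2, "enemies": 0.1, "strength": 0.2, "variance": 0.1},
            {"name": "B", "size": 0.5, "enemies": 0.4, "strength": 0.3, "variance": 0.2},
            {"name": "C", "size": 0.9, "enemies": 0.8, "strength": 0.6, "variance": 0.5},
        ]

        # Order levels by score; ideally successive ratios hover near a
        # target such as 1.3333 (or the golden ratio, ~1.618).
        ordered = sorted(levels, key=score)
        for prev, cur in zip(ordered, ordered[1:]):
            print(cur["name"], round(score(cur) / score(prev), 3))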

    Read the article
