Search Results

Search found 1366 results on 55 pages for 'complexity'.


  • how measure complexity of program

    - by gcc
    How does the compiler determine that there is a run-time error? Does it run the code and then decide whether the code is executable or not? Are there any programs capable of determining the complexity of my executable code? And is there any code for measuring the time from when a program starts executing until it finishes?
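
    For the timing part, a minimal sketch (my own illustration, not from the question; "./a.out" is just a placeholder for whatever executable gcc produced):

        # Minimal timing sketch: run the compiled program and measure
        # wall-clock time from start to finish. "./a.out" is a placeholder
        # for whatever executable gcc produced.
        import subprocess, time

        start = time.perf_counter()
        subprocess.run(["./a.out"], check=True)
        elapsed = time.perf_counter() - start
        print(f"program ran for {elapsed:.3f} seconds")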

    Read the article

  • How meaningful is the Big-O time complexity of an algorithm?

    - by james creasy
    Programmers often talk about the time complexity of an algorithm, e.g. O(log n) or O(n^2). Time complexity classifications describe behavior as the input size goes to infinity, yet ironically no real computation ever has infinite input. Put another way, the classification of an algorithm is based on a situation the algorithm will never actually be in: n = infinity. Also, consider that a polynomial-time algorithm with a huge exponent is about as useless as an exponential-time algorithm with a tiny base (e.g., 1.00000001^n) is useful. Given this, how much can I rely on Big-O time complexity when choosing between algorithms?
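
    To make the hidden-constant point concrete, here is a small sketch (the cost models and constants are made up for illustration) that finds roughly where an O(n log n) algorithm with a large constant finally overtakes an O(n^2) algorithm with a tiny one:

        # Made-up cost models: Big-O hides constant factors, so for small and
        # medium n the "worse" O(n^2) algorithm can be cheaper than the
        # "better" O(n log n) one.
        import math

        def cost_quadratic(n):
            return 0.5 * n * n              # tiny constant, O(n^2)

        def cost_nlogn(n):
            return 5000 * n * math.log2(n)  # huge constant, O(n log n)

        n = 2
        while cost_quadratic(n) <= cost_nlogn(n):
            n *= 2
        print(f"crossover lies somewhere between n = {n // 2} and n = {n}")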

    Read the article

  • Worse is better. Is there an example?

    - by J.F. Sebastian
    Is there a widely used algorithm that has time complexity worse than another known algorithm, yet is a better choice in all practical situations (worse complexity but better otherwise)? An acceptable answer might be of the form: there are algorithms A and B with O(N**2) and O(N) time complexity respectively, but B has such a big constant that it has no advantage over A for inputs smaller than the number of atoms in the Universe. Example highlights from the answers: the Simplex algorithm -- worst-case exponential time -- vs. known polynomial-time algorithms for convex optimization problems. A naive median-of-medians algorithm -- worst-case O(N**2) -- vs. a known O(N) algorithm. Backtracking regex engines -- worst-case exponential -- vs. O(N) Thompson NFA-based engines. All these examples exploit worst-case vs. average-case scenarios. Are there examples that do not rely on the difference between worst-case and average-case scenarios? Related: The Rise of ``Worse is Better''. (For the purpose of this question the "Worse is Better" phrase is used in a narrower sense -- namely algorithmic time complexity -- than in the article.) Python's Design Philosophy: The ABC group strived for perfection. For example, they used tree-based data structure algorithms that were proven to be optimal for asymptotically large collections (but were not so great for small collections). This example would be the answer if there were no computers capable of storing such large collections (in other words, large is not large enough in this case). The Coppersmith–Winograd algorithm for square matrix multiplication is a good example (it was the fastest known as of 2008, yet it is inferior in practice to asymptotically worse algorithms). From the Wikipedia article: "It is not used in practice because it only provides an advantage for matrices so large that they cannot be processed by modern hardware (Robinson 2005)." Any others?
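
    A tiny experiment in the same spirit (my own sketch, not from the answers): an O(n^2) insertion sort versus an O(n log n) merge sort on a small input, where the asymptotically "worse" algorithm routinely wins -- which is exactly why hybrid sorts fall back to insertion sort for short runs:

        # O(n^2) insertion sort vs. O(n log n) merge sort on a small list:
        # the lower constant factors of insertion sort usually win here.
        import random, timeit

        def insertion_sort(a):
            a = a[:]
            for i in range(1, len(a)):
                key, j = a[i], i - 1
                while j >= 0 and a[j] > key:
                    a[j + 1] = a[j]
                    j -= 1
                a[j + 1] = key
            return a

        def merge_sort(a):
            if len(a) <= 1:
                return a
            mid = len(a) // 2
            left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
            out, i, j = [], 0, 0
            while i < len(left) and j < len(right):
                if left[i] <= right[j]:
                    out.append(left[i]); i += 1
                else:
                    out.append(right[j]); j += 1
            return out + left[i:] + right[j:]

        data = [random.random() for _ in range(16)]
        for sort in (insertion_sort, merge_sort):
            print(sort.__name__, timeit.timeit(lambda: sort(data), number=20000))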

    Read the article

  • Help with password complexity regex

    - by Alex
    I'm using the following regex to validate password complexity: /^.*(?=.{6,12})(?=.*[0-9]{2})(?=.*[A-Z]{2})(?=.*[a-z]{2}).*$/ In a nutshell: 2 lowercase, 2 uppercase, 2 numbers, min length is 6 and max length is 12. It works perfectly, except for the maximum length, when I'm using a minimum length as well. For example: /^.*(?=.{6,})(?=.*[0-9]{2})(?=.*[A-Z]{2})(?=.*[a-z]{2}).*$/ This correctly requires a minimum length of 6! And this: /^.*(?=.{,12})(?=.*[0-9]{2})(?=.*[A-Z]{2})(?=.*[a-z]{2}).*$/ Correctly requires a maximum length of 12. However, when I pair them together as in the first example, it just doesn't work!! What gives? Thanks!
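
    One likely culprit, sketched here in Python's re syntax (the same idea applies to most PCRE-style engines): a lookahead such as (?=.{6,12}) only asserts that at least 6 characters lie ahead -- it never caps the total length, and the surrounding .* happily matches more. Anchoring the length test itself bounds it. Note that the original [0-9]{2} demands two consecutive digits, while the (?:.*[0-9]){2} used below accepts two digits anywhere; keep whichever was intended:

        import re

        PASSWORD_RE = re.compile(
            r"^(?=(?:.*[0-9]){2})"   # at least two digits anywhere
            r"(?=(?:.*[A-Z]){2})"    # at least two uppercase letters anywhere
            r"(?=(?:.*[a-z]){2})"    # at least two lowercase letters anywhere
            r".{6,12}$"              # total length between 6 and 12
        )

        for pwd in ["Ab1Cd2", "abcdef", "Ab1Cd2Ef3Gh4XX"]:
            print(pwd, bool(PASSWORD_RE.match(pwd)))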

    Read the article

  • How to Check all the nodes in tree view with minimum complexity

    - by Vinni
    I need to check/select all the nodes in a tree view with minimum complexity. My tree view has 3 levels and many nodes in it. Below is my code: <asp:TreeView ID="TreeView1" runat="server" DataSourceID="XmlDataSource1" ShowCheckBoxes="All" ShowExpandCollapse="true"> <DataBindings> <asp:TreeNodeBinding DataMember="Category" TextField="Name" ValueField="Value" /> <asp:TreeNodeBinding DataMember="LeafCategory" TextField="Name" ValueField="Value" /> <asp:TreeNodeBinding DataMember="ChildCategory" TextField="Name" ValueField="Value" /> <asp:TreeNodeBinding DataMember="SubCategory" TextField="Name" ValueField="Value" /> <asp:TreeNodeBinding DataMember="Categories" TextField="Name" ValueField="Value" /> </DataBindings> </asp:TreeView>

    Read the article

  • complexity of algorithms

    - by davit-datuashvili
    I have a question: what is the complexity of this algorithm? public class smax{ public static void main(String[]args){ int b[]=new int[11]; int a[]=new int[]{4,9,2,6,8,7,5}; for (int i=0;i int m=0; while (m int k=a[0]; for (int i=0;i k && b[a[i]]!=1){ b[a[i]]=1; } } m++; } for (int i=0;i for (int j=0;j //result=2 4 5 6 7 8 9 } }

    Read the article

  • what is order notation f(n)=O(g(n))?

    - by Lopa
    Two questions. Question 1: under what circumstances would this [O(f(n)) = O(k.f(n))] be the most appropriate form of time-complexity analysis? Question 2: working from the mathematical definition of O notation, show that O(f(n)) = O(k.f(n)) for a positive constant k. My view: for the first one I think it refers to the average-case and worst-case forms of time-complexity analysis. Am I right? And what else should I write there? For the second one I think we need to work from the definition mathematically, so is the answer something like: the multiplication by a constant just corresponds to a readjustment of the value of the arbitrary constant in the definition of O?
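
    For the second question, a short sketch working from the standard definition (the constants $c$ and $n_0$ below are the ones from that definition, not from the question):

        By definition, $g \in O(f(n))$ iff there exist constants $c > 0$ and $n_0$
        such that $|g(n)| \le c\,|f(n)|$ for all $n \ge n_0$.

        If $|g(n)| \le c\,|f(n)|$, then $|g(n)| \le (c/k)\,\bigl(k\,|f(n)|\bigr)$,
        so $g \in O(k\,f(n))$ with constant $c/k$.
        Conversely, if $|g(n)| \le c\,\bigl(k\,|f(n)|\bigr)$, then $|g(n)| \le (ck)\,|f(n)|$,
        so $g \in O(f(n))$ with constant $ck$.
        Since $k > 0$ is a fixed constant, both directions hold, hence $O(f(n)) = O(k\,f(n))$.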

    Read the article

  • How is schoolbook long division an O(n^2) algorithm?

    - by eSKay
    Premise: This Wikipedia page suggests that the computational complexity of schoolbook long division is O(n^2). Deduction: Instead of taking "two n-digit numbers", if I take one n-digit number and one m-digit number, then the complexity would be O(n*m). Contradiction: Suppose you divide 100000000 (n digits) by 1000 (m digits); you get 100000, which takes six steps to arrive at. Now, if you divide 100000000 (n digits) by 10000 (m digits), you get 10000, which takes only five steps. Conclusion: So it seems that the order of computation should be something like O(n/m). Question: Who is wrong, me or Wikipedia, and where?
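
    A rough operation-counting sketch of the usual analysis (my own illustration, not from the Wikipedia page): each quotient digit costs a bounded number of comparisons/subtractions against the m-digit divisor, and the quotient has about n - m + 1 digits, so the work grows roughly like (n - m + 1) * m, which becomes O(n^2) in the case Wikipedia assumes, namely two n-digit operands:

        # Rough digit-operation count for schoolbook long division of an
        # n-digit number by an m-digit number. Each quotient digit needs up
        # to ~10 trial subtractions/comparisons against the m-digit divisor;
        # the constant 10 does not affect the order of growth.
        def schoolbook_division_ops(n_digits: int, m_digits: int) -> int:
            quotient_digits = max(n_digits - m_digits + 1, 1)
            return quotient_digits * 10 * m_digits

        for n, m in [(9, 4), (9, 5), (18, 9), (18, 18)]:
            print(f"n={n}, m={m}: ~{schoolbook_division_ops(n, m)} digit ops")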

    Read the article

  • Accidental Complexity in OpenSSL HMAC functions

    - by Hassan Syed
    SSL Documentation Analysis This question pertains to the usage of the HMAC routines in OpenSSL. Since the OpenSSL documentation is a tad on the weak side in certain areas, profiling has revealed that using: unsigned char *HMAC(const EVP_MD *evp_md, const void *key, int key_len, const unsigned char *d, int n, unsigned char *md, unsigned int *md_len); (from here) means 40% of my library runtime is devoted to creating and tearing down HMAC_CTXs behind the scenes. There are also two additional functions to create and destroy an HMAC_CTX explicitly: HMAC_CTX_init() initialises a HMAC_CTX before first use. It must be called. HMAC_CTX_cleanup() erases the key and other data from the HMAC_CTX and releases any associated resources. It must be called when an HMAC_CTX is no longer required. These two function calls are prefixed with: "The following functions may be used if the message is not completely stored in memory". My data fits entirely in memory, so I chose the HMAC function -- the one whose signature is shown above. The context, as described by the man page, is made use of via the following two functions: HMAC_Update() can be called repeatedly with chunks of the message to be authenticated (len bytes at data). HMAC_Final() places the message authentication code in md, which must have space for the hash function output. The Scope of the Application My application generates an authenticated (HMAC, which is also used as a nonce), CBC-BF encrypted protocol buffer string. The code will be interfaced with various web servers and frameworks: Windows/Linux as the OS; nginx, Apache and IIS as web servers; and Python, .NET and C++ web-server filters. The description above should clarify that the library needs to be thread safe, and potentially have resumable processing state -- i.e., lightweight threads sharing an OS thread (which might leave thread-local memory out of the picture). The Question How do I get rid of the 40% overhead on each invocation in a way that is (1) thread-safe and (2) has resumable state? (2) is optional, since I have all of the source data present in one go and can make sure a digest is created in place without relinquishing control of the thread mid-digest-creation. So, (1) can probably be done using thread-local memory -- but how do I reuse the CTXs? Does the HMAC_Final() call make the CTX reusable? (2) optional: in this case I would have to create a pool of CTXs. (3) How does the HMAC function do this? Does it create a CTX in the scope of the function call and destroy it? Pseudocode and commentary will be useful.

    Read the article

  • Ruby on Rails: reducing complexity of parameters in a RESTFul HTTP POST request (multi-model)

    - by randombits
    I'm using cURL to test a RESTful HTTP web service. The problem is I'm normally submitting a bunch of values like this: curl -d "firstname=bob&lastname=smith&age=25&from=kansas&someothermodelattr=val" -H "Content-Type: application/x-www-form-urlencoded" http://mysite/people.xml -i The problem with this is my controller will then have code like this: unless params[:firstname].nil? end unless params[:lastname].nil? end // FINALLY @person = People.new(params[:firstname], params[:lastname], params[:age], params[:from]) etc.. What's the best way to simplify this? My Person model has all the validations it needs. Is there a way (assuming the request has multi-model parameters) that I can just do: @person = People.new(params[:person]) and have the initializer take care of the rest?

    Read the article

  • c# object initializer complexity. best practice

    - by Andrew Florko
    I was really excited when object initializers appeared in C#. MyClass a = new MyClass(); a.Field1 = Value1; a.Field2 = Value2; can be rewritten more concisely: MyClass a = new MyClass { Field1 = Value1, Field2 = Value2 }; Object initializer code is more obvious, but when the number of properties reaches a dozen and some of the assignments deal with nullable values, it's hard to debug where the "null reference" error is, because Visual Studio points at the whole object initializer as the error location. Nowadays I use object initializers only for straightforward assignments of error-free properties. How do you use object initializers for complex assignments, or is it bad practice to use a dozen assignments at all? Thank you in advance!

    Read the article

  • Reusability, testability, code complexity reduction and showing-off-ability programming importance

    - by Andrew Florko
    There are lots of programming and architecture patterns. Patterns help make code cleaner, more reusable, more testable and, last but not least, make whoever follows them feel like a really cool developer. How do you rank these considerations for yourself? What affects you most when you decide to apply a pattern? I wonder how often code reusability (especially for the MVP and MVC patterns) has actually been important. For example, a DAL library is often shared between projects (it's reusable), but how often are controllers/views (abstracted via interfaces) actually reused?

    Read the article

  • Drupal and Back-End Complexity

    - by Brian
    Currently I am working on a school website, and we are still in the decision-making process of choosing a framework (we know that we're not using Joomla! or hand-coding). Drupal came up as a viable choice, and currently it is my best bet for the site. However, I have an issue with CMSs in general. I would like to develop a quite complicated, custom-tailored back-end application for teachers to interact with individual students, including the design of shared/custom calendars, announcement privileges, etc. I currently have a bit of expertise with HTML, CSS, PHP, and MySQL, and I could wrap my head around some JavaScript and AJAX stuff if need be. However, would such a complicated application work with Drupal (in that I could create it to specifically suit my purposes)?

    Read the article

  • Is it a good idea to use a formula to balance a game's complexity, in order to keep players in constant flow?

    - by user1107412
    I read a lot about Flow theory and its applications to video games, and an idea got stuck in my mind. If a number of weight values are applied to different parameters of a game level (e.g. the size of the level, the number of enemies, their overall strength, the variance in their behavior, etc.), then it should be technically possible to compute an overall complexity score for each level in the game. If a constant ratio of complexity increase were empirically defined, for instance 1.3333 or, say, the Golden Ratio, would it be a good idea to arrange the levels in such an order that the overall complexity increases by roughly that ratio from one level to the next? Has anybody tried it?
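
    A minimal sketch of the scoring idea (the parameter names, weights, and target ratio below are made-up assumptions, not taken from the question or any real game):

        # Weighted-sum "overall complexity" score per level; weights, level
        # parameters, and the target ratio are illustrative assumptions.
        LEVEL_WEIGHTS = {"size": 0.5, "enemies": 2.0, "strength": 3.0, "behavior_variance": 4.0}
        TARGET_RATIO = 1.3333  # empirically chosen growth ratio between consecutive levels

        def level_score(level):
            return sum(LEVEL_WEIGHTS[k] * level[k] for k in LEVEL_WEIGHTS)

        levels = [
            {"name": "castle", "size": 40, "enemies": 9, "strength": 4, "behavior_variance": 3},
            {"name": "cave",   "size": 10, "enemies": 3, "strength": 1, "behavior_variance": 1},
            {"name": "forest", "size": 25, "enemies": 5, "strength": 2, "behavior_variance": 2},
        ]

        ordered = sorted(levels, key=level_score)  # easiest to hardest
        for prev, cur in zip(ordered, ordered[1:]):
            ratio = level_score(cur) / level_score(prev)
            print(f"{prev['name']} -> {cur['name']}: ratio {ratio:.2f} (target {TARGET_RATIO})")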

    Read the article

  • What is the minimal licensable source code?

    - by Hernán Eche
    Let's suppose I want to "protect" this code from being used without attribution, from being patented, or through any open source licence... #include<stdio.h> int main (void) { int version=2; printf("\r\n.Hello world, ver:(%d).", version); return 0; } It's a bit obvious, or just a language-definition example. When does a source stop being "trivial, banal, commonplace, obvious" and start being something over which you may claim "rights"? Perhaps it depends on who reads it: something that could look like great genius to someone who has never programmed could be just obvious to an expert. It's easy when, comparing two sources, there are 10000 identical lines of code: that's theft. But it's not always so obvious. How do you measure the amount of "ownness"? Is it about creativity? Line counts? Complexity? I can't imagine objective answers for that, only some patches. For example, perhaps complexity: it's not fair to replace "years of engineering" with "copy and paste". But is there any objective index for an objective determination of this subject? (In a funny way I imagine this criterion: if the licence is longer than the code, then there is no owner, just to punish those who don't care about storage space and world resources =P)

    Read the article

  • Efficient Multiplication of Varying-Length #s [Conceptual]

    - by Milan Patel
    Write the pseudocode of an algorithm that takes in two arbitrary length numbers (provided as strings), and computes the product of these numbers. Use an efficient procedure for multiplication of large numbers of arbitrary length. Analyze the efficiency of your algorithm. I decided to take the (semi) easy way out and use the Russian Peasant Algorithm. It works like this: a * b = a/2 * 2b if a is even a * b = (a-1)/2 * 2b + b if a is odd My pseudocode is: rpa(x, y){ if x is 1 return y if x is even return rpa(x/2, 2y) if x is odd return rpa((x-1)/2, 2y) + y } I have 3 questions: Is this efficient for arbitrary length numbers? I implemented it in C and tried varying length numbers. The run-time was near-instant in all cases, so it's hard to tell empirically... Can I apply the Master's Theorem to understand the complexity...? a = # subproblems in recursion = 1 (max 1 recursive call across all states) n / b = size of each subproblem = n / 1 - b = 1 (problem doesn't change size...?) f(n^d) = work done outside recursive calls = 1 - d = 0 (the addition when a is odd) a = 1, b^d = 1, a = b^d - complexity is in n^d*log(n) = log(n) This makes sense logically since we are halving the problem at each step, right? What might my professor mean by providing arbitrary length numbers "as strings"? Why do that? Many thanks in advance
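
    A runnable version of the pseudocode (a sketch in Python, whose built-in integers are already arbitrary precision, so it sidesteps the string-based digit arithmetic the assignment appears to ask for):

        def rpa(x: int, y: int) -> int:
            # Russian Peasant multiplication, following the question's pseudocode.
            # It makes O(log x) recursive calls, but note that each doubling/
            # addition on n-digit operands itself costs O(n) digit operations,
            # so the work per call is not constant for arbitrary-length numbers.
            if x == 1:
                return y
            if x % 2 == 0:
                return rpa(x // 2, 2 * y)
            return rpa((x - 1) // 2, 2 * y) + y

        assert rpa(238, 13) == 238 * 13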

    Read the article

  • Big O, how do you calculate/approximate it?

    - by Sven
    Most people with a degree in CS will certainly know what Big O stands for. It helps us measure how (in)efficient an algorithm really is, and if you know which category the problem you are trying to solve lies in, you can figure out whether it is still possible to squeeze out that little extra performance.* But I'm curious: how do you calculate or approximate the complexity of your algorithms? *: but as they say, don't overdo it; premature optimization is the root of all evil, and optimization without a justified cause deserves that name as well.
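
    One empirical approach, as a rough sketch (my own illustration, not from the question): time the code at n and 2n and look at the ratio -- roughly 2x suggests O(n), roughly 4x suggests O(n^2), and so on:

        import time

        def measure(f, n):
            start = time.perf_counter()
            f(n)
            return time.perf_counter() - start

        def quadratic_demo(n):
            # Deliberately O(n^2) work so the doubling ratio is visible.
            return sum(i * j for i in range(n) for j in range(n))

        for n in (500, 1000, 2000):
            t1, t2 = measure(quadratic_demo, n), measure(quadratic_demo, 2 * n)
            print(f"n={n}: doubling n multiplied the runtime by ~{t2 / t1:.1f}")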

    Read the article

  • Code Analysis In Python

    - by Jerub
    What tools are good to use for code analysis in Python? I have a large source repository split across multiple projects, and I would like to be able to run tools across the directories to see details like cyclomatic complexity, and perhaps be able to spot errors using static analysis. Ideally, I would like to be able to produce a report about the health of the source code, so we can spot problem areas that need to be addressed.
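
    Failing a dedicated tool, here is a self-contained sketch of the cyclomatic-complexity idea (a rough approximation of my own, not a replacement for real analysis tools): count branch points per function in the AST:

        # Rough cyclomatic-complexity approximation: 1 + number of decision
        # points (if/for/while/try/boolean ops/...) inside each function.
        import ast
        import sys

        BRANCH_NODES = (ast.If, ast.For, ast.While, ast.Try, ast.With,
                        ast.BoolOp, ast.IfExp, ast.comprehension)

        def function_complexity(func):
            return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(func))

        def report(path):
            tree = ast.parse(open(path, encoding="utf-8").read(), filename=path)
            for node in ast.walk(tree):
                if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                    print(f"{path}:{node.lineno} {node.name} "
                          f"complexity={function_complexity(node)}")

        if __name__ == "__main__":
            for filename in sys.argv[1:]:
                report(filename)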

    Read the article

  • Longest Common Subsequence

    - by tsudot
    Consider two sequences X[1..m] and Y[1..n]. The memoization algorithm computes the LCS in O(m*n) time. Is there any better algorithm with respect to time? I guess memoization done diagonally could give us O(min(m,n)) time complexity.
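
    For reference, the standard dynamic-programming LCS-length computation, sketched here in Python: it still performs O(m*n) work, but by keeping only two rows it needs just O(min(m,n)) space -- space, rather than time, is where that bound normally shows up:

        def lcs_length(x, y):
            # DP over an (m+1) x (n+1) table, keeping only two rows at a time.
            # Time stays O(m*n); space drops to O(min(m, n)).
            if len(x) < len(y):
                x, y = y, x          # make y the shorter sequence
            prev = [0] * (len(y) + 1)
            for xi in x:
                cur = [0] * (len(y) + 1)
                for j, yj in enumerate(y, start=1):
                    cur[j] = prev[j - 1] + 1 if xi == yj else max(prev[j], cur[j - 1])
                prev = cur
            return prev[-1]

        assert lcs_length("ABCBDAB", "BDCABA") == 4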

    Read the article
