Search Results

Search found 9449 results on 378 pages for 'big marc'.


  • Unix: millionth number in the series 2 3 4 6 9 13 19 28 42 63 ... ?

    - by HH
    It takes about a minute to reach term 3000 on my computer, but I need the millionth number in the series. The definition is recursive, so I cannot see any shortcut other than computing every term before the millionth one. How can you compute the millionth number in the series quickly? Series definition: n_{i+1} = floor(3/2 * n_i) with n_0 = 2. Interestingly, according to Google only one site lists the series: this one. Too-slow Bash code:

      #!/bin/bash
      # n_{i+1} = floor(3/2 * n_i), n_0 = 2
      # bc runs with scale=0 here, so integer division already floors;
      # tr strips the backslash-newline continuations bc emits for very long numbers.
      n=2
      nth=1
      while true; do
          n=$( echo "$n * 3 / 2" | bc | tr -d '\\\n' )
          echo "$nth $n"
          nth=$(( nth + 1 ))
      done

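    A faster route than shelling out to bc on every iteration (a sketch, not from the original post; the function name is mine): Python's arbitrary-precision integers compute the floor in a single in-process operation.

      def nth_term(i, n0=2):
          # n_{j+1} = floor(3 * n_j / 2), n_0 = n0; integer // is an exact floor
          n = n0
          for _ in range(i):
              n = n * 3 // 2
          return n

      # Sanity check against the post: the first terms are 2 3 4 6 9 13 19 28 42 63.
      # nth_term(10**6) has about 10**6 * log10(1.5), roughly 176,000 digits, so the
      # loop takes minutes of big-integer arithmetic rather than days of bc calls.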

  • Find if there is an element repeating itself n/k times

    - by gleb-pendler
    You have an array of size n and a constant k (whatever). You can assume that the array is of int type (although it could be of any type). Describe an algorithm that finds whether there is an element (or elements) that repeats itself at least n/k times, and if there is, return one. Do so in linear time (O(n)). The catch: do this algorithm (or even pseudocode) using constant memory and running over the array only twice.

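    The textbook answer is the Misra-Gries frequent-elements algorithm: one pass maintaining a bounded set of candidate counters, one verification pass. A minimal Python sketch (the names are mine); it keeps k counters rather than k-1 so that elements hitting the n/k threshold exactly are still guaranteed to survive the first pass.

      def find_frequent(arr, k):
          # Pass 1 (Misra-Gries): at most k live counters, O(1) memory for constant k.
          counters = {}
          for x in arr:
              if x in counters:
                  counters[x] += 1
              elif len(counters) < k:
                  counters[x] = 1
              else:
                  for c in list(counters):   # decrement everything, drop zeros
                      counters[c] -= 1
                      if counters[c] == 0:
                          del counters[c]
          # Pass 2: exact counts for the surviving candidates only.
          counts = dict.fromkeys(counters, 0)
          for x in arr:
              if x in counts:
                  counts[x] += 1
          for c, cnt in counts.items():
              if cnt * k >= len(arr):        # cnt >= n/k, no floating point
                  return c
          return None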

  • minimum L sum in an m×n matrix - 2

    - by hilal
    Here is my first question about maximum L sum, and this is a different, harder version of it. Problem: given an m×n positive integer matrix, find the minimum L sum from the 0th row to the mth row, where an L (4 cells) is shaped like a chess knight's move. Example:

      M = 3x3
      0 1 2
      1 3 2
      4 2 1

    Possible L moves are: (0 1 2 2), (0 1 3 2), (0 1 4 2). We should go from the 0th row to the 3rd row with the minimum sum. I solved this with dynamic programming, and here is my algorithm:

      1. Take another m×n array, Minimum L Moves Sum (I call it MLMS), and copy in the first row of the main matrix.
      2. Start from the first cell, look at the L moves upward from it, and calculate each sum.
      3. Insert a sum into MLMS if it is less than the existing value.
      4. Repeat step 2 until the mth row.
      5. Choose the minimum sum in the mth row.

    Let me explain on my example step by step:

      M[0][0]: sum(L1 = (0,1,2,2)) = 5; sum(L2 = (0,1,3,2)) = 6; so MLMS[0][1] = 6
               sum(L3 = (0,1,3,2)) = 6; sum(L4 = (0,1,4,2)) = 7; so MLMS[2][1] = 6
      M[0][1]: sum(L5 = (1,0,1,4)) = 6; sum(L6 = (1,3,2,4)) = 10; so MLMS[2][2] = 6
      ...

    The final MLMS is:

      0 1 2
      4 3 6
      6 6 6

    which means 6 is the minimum L sum that can be reached from row 0 to row m. I think this is O(8*(m-1)*n) = O(m*n). Is there an optimal solution, or a dynamic programming formulation that fits this problem better? Thanks, and sorry for the long question.


  • Challenging question: find if there is an element repeating itself n/k times

    - by gleb-pendler
    Here is how it goes: you have an array of size n and a constant k (whatever). You can assume the array is of int type, though it could be of any type; for clarity, let's assume integers. Describe an algorithm that finds whether there is an element (or elements) that repeats itself at least n/k times, and if there is, return one. Do it in linear time, O(n). Important: the catch is to do this algorithm (or even pseudocode) using a constant amount of memory and running over the array only TWICE!


  • O(log N) == O(1) - Why not?

    - by phoku
    Whenever I consider algorithms/data structures, I tend to replace the log(N) parts by constants. Oh, I know log(N) diverges - but does it matter in real-world applications? log(infinity) < 100 for all practical purposes. I am really curious about real-world examples where this doesn't hold. To clarify: I understand O(f(N)); I am curious about real-world examples where the asymptotic behaviour matters more than the constants of the actual performance. If log(N) can be replaced by a constant, it can still be replaced by a constant in O(N log N). This question is for the sake of (a) entertainment and (b) gathering arguments to use if I run (again) into a controversy about the performance of a design.

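    A quick numeric check of the premise (my own illustration, not from the post): even absurdly large inputs keep log2(N) small.

      import math

      for p in (6, 12, 18, 30):
          print(f"log2(10^{p}) = {p * math.log2(10):.1f}")

      # log2(10^6)  = 19.9    log2(10^12) = 39.9
      # log2(10^18) = 59.8    log2(10^30) = 99.7
      # Any input under 10^30 elements keeps log2(N) below 100, as the post claims.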

  • Time complexity O() of isPalindrome()

    - by Aran
    I have this method, isPalindrome(), and I am trying to find its time complexity, and also to rewrite the code more efficiently:

      boolean isPalindrome(String s) {
          boolean bP = true;
          for (int i = 0; i < s.length(); i++) {
              if (s.charAt(i) != s.charAt(s.length() - i - 1)) {
                  bP = false;
              }
          }
          return bP;
      }

    Now I know this code checks each character against the character mirrored at the other end of the string, and leaves bP unchanged when they match. And I think the operations are s.length(), s.charAt(i) and s.charAt(s.length()-i-1), making the time complexity O(N + 3), I think? Is this correct? If not, what is it, and how is that figured out? Also, to make this more efficient, would it be good to store the characters in temporary strings?

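    For reference, a tighter version (a Python sketch of the same idea, not the poster's code): compare only the first half against the mirrored second half and stop at the first mismatch.

      def is_palindrome(s: str) -> bool:
          for i in range(len(s) // 2):
              if s[i] != s[len(s) - 1 - i]:
                  return False          # early exit on the first mismatch
          return True

      # Still O(n) in the worst case (a genuine palindrome must be checked in full),
      # but it makes at most n/2 comparisons and does no work after a mismatch.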

  • Linear time and quadratic time

    - by jasonline
    I'm just not sure... If you have code that can be executed with either of the following complexities: (1) a sequence of O(n) passes, for example two O(n) passes in sequence, or (2) O(n²), the preferred version would be the one that runs in linear time. But could there be a case where the sequence of O(n) passes would be too much and O(n²) would be preferred? In other words, is the statement C * O(n) < O(n²) always true for any constant C? If not, what factors would affect the condition such that it would be better to choose the O(n²) version?

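    In concrete terms, C*n < n² exactly when n > C, so a quadratic algorithm can only win below some fixed input size (typically when it has a much smaller constant, such as insertion sort on tiny arrays). A rough timing sketch (illustrative only; the function names are mine):

      import time

      def two_linear_passes(a):              # C * O(n): two separate scans
          return sum(a), max(a)

      def quadratic_pairs(a):                # O(n^2): touches every ordered pair
          count = 0
          for x in a:
              for y in a:
                  count += x < y
          return count

      for n in (100, 1000, 5000):
          a = list(range(n))
          t0 = time.perf_counter(); two_linear_passes(a)
          t1 = time.perf_counter(); quadratic_pairs(a)
          t2 = time.perf_counter()
          print(n, t1 - t0, t2 - t1)         # the gap grows roughly n-fold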

  • Python (sampling with replacement): efficient algorithm to extract the set of DISSIMILAR N-tuples from a set

    - by Homunculus Reticulli
    I have a set of items from which I want to select DISSIMILAR tuples (more on the definition of dissimilar tuples later). The set could potentially contain several thousand items, although typically it would contain only a few hundred. I am trying to write a generic algorithm that will allow me to select N items from the original set to form an N-tuple. The new set of selected N-tuples should be DISSIMILAR. An N-tuple A is said to be DISSIMILAR to another N-tuple B if and only if every pair (2-tuple) that occurs in A DOES NOT appear in B. Note: for this algorithm, a 2-tuple (pair) is considered SIMILAR/IDENTICAL if it contains the same elements, i.e. (x,y) is considered the same as (y,x). This is a (possible variation on the) classic Urn Problem. A trivial (pseudocode) implementation would be something along the lines of:

      def fetch_unique_tuples(original_set, tuple_size):
          while True:
              # randomly select [tuple_size] items from the set to form a candidate tuple
              # create a key or hash from the N elements and store it in a set
              # store the selected N-tuple in a container
              if end_condition_met:
                  break

    I don't think this is the most efficient way of doing this - and though I am no algorithm theorist, I suspect the running time of this algorithm is NOT O(n); in fact, it is probably more likely O(n!). I am wondering if there is a more efficient way of implementing such an algorithm, preferably reducing the time to O(n). Actually, as Mark Byers pointed out, there is a second variable m, the number of elements being selected. This (i.e. m) will typically be between 2 and 5. Regarding examples, here is a typical (albeit shortened) one:

      original_list = ['CAGG', 'CTTC', 'ACCT', 'TGCA', 'CCTG', 'CAAA', 'TGCC', 'ACTT',
                       'TAAT', 'CTTG', 'CGGC', 'GGCC', 'TCCT', 'ATCC', 'ACAG', 'TGAA',
                       'TTTG', 'ACAA', 'TGTC', 'TGGA', 'CTGC', 'GCTC', 'AGGA', 'TGCT',
                       'GCGC', 'GCGG', 'AAAG', 'GCTG', 'GCCG', 'ACCA', 'CTCC', 'CACG',
                       'CATA', 'GGGA', 'CGAG', 'CCCC', 'GGTG', 'AAGT', 'CCAC', 'AACA',
                       'AATA', 'CGAC', 'GGAA', 'TACC', 'AGTT', 'GTGG', 'CGCA', 'GGGG',
                       'GAGA', 'AGCC', 'ACCG', 'CCAT', 'AGAC', 'GGGT', 'CAGC', 'GATG',
                       'TTCG']

    Selecting 3-tuples from the original list should produce a list (or set) similar to:

      [('CAGG', 'CTTC', 'ACCT')
       ('CAGG', 'TGCA', 'CCTG')
       ('CAGG', 'CAAA', 'TGCC')
       ('CAGG', 'ACTT', 'ACCT')
       ('CAGG', 'CTTG', 'CGGC')
       ....
       ('CTTC', 'TGCA', 'CAAA')]

    [[Edit]] Actually, in constructing the example output, I realized that the earlier definition I gave for UNIQUENESS was incorrect. I have updated my definition and have introduced a new metric of DISSIMILARITY instead as a result of this finding.

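    A concrete version of the sampler (a sketch under my own naming; rejection sampling, which is simple but not guaranteed O(n)): track every unordered pair already used and accept a candidate tuple only if all of its pairs are new. frozenset makes (x, y) and (y, x) the same pair, matching the post's definition.

      import itertools
      import random

      def fetch_dissimilar_tuples(items, n, num_tuples, max_tries=100_000):
          # items: a list; n: tuple size (the post's m, typically 2..5)
          used_pairs = set()
          out = []
          for _ in range(max_tries):
              if len(out) == num_tuples:
                  break
              cand = random.sample(items, n)
              pairs = {frozenset(p) for p in itertools.combinations(cand, 2)}
              if used_pairs.isdisjoint(pairs):   # no pair shared with any earlier tuple
                  used_pairs |= pairs
                  out.append(tuple(cand))
          return out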

  • The limits of parallelism

    - by psihodelia
    Is it possible to solve a problem of O(n!) complexity within a reasonable time, given an infinite number of processing units and infinite space? The typical example of an O(n!) problem is brute-force search: trying all permutations (ordered combinations).

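    For a sense of the scale involved (my own back-of-the-envelope check, not from the post): n! passes the estimated number of atoms in the observable universe (about 10^80) already at n = 59.

      import math

      n = 1
      while math.factorial(n) < 10**80:
          n += 1
      print(n)   # 59: even one processing unit per atom cannot enumerate 59! permutations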

  • Given a number N, find the number of ways to write it as a sum of two or more consecutive integers

    - by hilal
    Here is the problem: given a number N, find the number of ways to write it as a sum of two or more consecutive integers. For example: 15 = 7+8 = 1+2+3+4+5 = 4+5+6. I solved it with math like this:

      a + (a+1) + (a+2) + (a+3) + ... + (a+k) = N
      (k+1)*a + (1 + 2 + 3 + ... + k) = N
      (k+1)*a + k*(k+1)/2 = N
      (k+1)*(2*a + k)/2 = N

    Then, by checking whether N is divisible by (k+1) and (2*a+k) appropriately, I can find the answer in O(N) time. Here is my question: how can you solve this with dynamic programming, and what is the complexity (O)? P.S.: excuse me if this is a duplicate question; I searched but could not find one.

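    For reference, the post's final equation already yields a direct enumeration (a sketch of that O(N)-style check, not the dynamic programming version being asked about): rewrite it as (k+1)*(2a+k) = 2N and solve for a positive integer a at each run length.

      def count_consecutive_sums(N):
          count = 0
          k = 1
          # the smallest valid run starting at a = 1 needs 1 + 2 + ... + (k+1) <= N
          while (k + 1) * (k + 2) // 2 <= N:
              if (2 * N) % (k + 1) == 0:
                  t = (2 * N) // (k + 1) - k   # t = 2a from (k+1)*(2a + k) = 2N
                  if t > 0 and t % 2 == 0:
                      count += 1
              k += 1
          return count

      # count_consecutive_sums(15) == 3, matching 7+8, 4+5+6 and 1+2+3+4+5.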

  • Link failure with either abnormal memory consumption or LNK1106 in Visual Studio 2005.

    - by Corvin
    Hello, I am trying to build a solution for Windows XP in Visual Studio 2005. This solution contains 81 projects (static libs, EXEs, DLLs) and is being successfully used by our partners. I copied the solution bundle from their repository and tried setting it up on 3 similar machines of people in our group. I was successful on two machines, and the solution failed to build on my machine. The build on my machine encountered two problems:

      1. During a simple build, creation of the biggest static library (about 522 MB in debug mode) fails with the message "13libd\ui1d.lib : fatal error LNK1106: invalid file or disk full: cannot seek to 0x20101879".
      2. A full solution rebuild creates this library; however, when it comes to linking the library into the main .exe file, devenv.exe spawns link.exe, which consumes about 80 MB of physical memory and 250 MB of virtual memory and spawns another link.exe, which does the same. This goes on until the system runs out of memory.

    On the PCs of my colleagues where the build succeeds, there is only one link.exe process, which uses all the memory required for linking (about 500 MB physical). There is plenty of hard drive space on my machine, and the file system is NTFS. All three of our systems are similar - Core2Quad processors, 4 GB of RAM, Windows XP SP3 - and we are using Visual Studio installed from the same source. I tried using different RAM and a different CPU, using a dedicated graphics adapter to eliminate the possibility of video memory sharing influencing the build, putting the solution files in a different location, using different versions of VS 2005 (Professional, Standard and Team Suite), changing the amount of available virtual memory, running memtest86, and building the project from scratch (i.e. a clean bundle). I have read what MSDN says about LNK1106; none of the cases apply to me except maybe "out of heap space", though I am not sure how I should fight this. The only idea I have left is reinstalling the OS, but I am not sure it would help, and I am not sure my situation wouldn't repeat itself on a different machine. Would anyone have any sort of advice for me? Thanks


  • Algorithm to determine if array contains n...n+m?

    - by Kyle Cronin
    I saw this question on Reddit, and there were no positive solutions presented, and I thought it would be a perfect question to ask here. This was in a thread about interview questions: Write a method that takes an int array of size m and returns (true/false) whether the array consists of the numbers n...n+m-1, that is, all numbers in that range and only numbers in that range. The array is not guaranteed to be sorted. (For instance, {2,3,4} would return true; {1,3,1} would return false; {1,2,4} would return false.) The problem I had with this one is that my interviewer kept asking me to optimize (faster, O(n), less memory, etc.), to the point where he claimed you could do it in one pass of the array using a constant amount of memory. Never figured that one out. Along with your solutions, please indicate whether they assume that the array contains unique items, and whether they assume the sequence starts at 1. (I've modified the question slightly to allow cases where it goes 2, 3, 4...) Edit: I am now of the opinion that there does not exist an algorithm that is linear in time and constant in space and handles duplicates. Can anyone verify this? The duplicate problem boils down to testing whether the array contains duplicates in O(n) time and O(1) space. If this can be done, you can simply test first, and if there are no duplicates, run the algorithms posted. So: can you test for dupes in O(n) time and O(1) space?

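    Under the uniqueness assumption, the interviewer's one-pass, constant-memory version does exist (a sketch; the naming is mine): m distinct values all fit in a window of width m exactly when max - min + 1 == m.

      def is_consecutive_range(a):
          # Assumes all elements are distinct; with duplicates this yields
          # false positives (e.g. [1, 2, 2, 4]), which is the hard case above.
          lo = hi = a[0]
          for v in a[1:]:            # single pass, O(1) extra space
              if v < lo: lo = v
              if v > hi: hi = v
          return hi - lo + 1 == len(a)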

  • Can hash tables really be O(1)

    - by drawnonward
    It seems to be common knowledge that hash tables can achieve O(1), but that has never made sense to me. Can someone please explain it? Two situations come to mind: A. The value is an int smaller than the size of the hash table, so the value is its own hash, so there is no real hash table - but if there was, it would be O(1) and still be inefficient. B. You have to calculate the hash, which is O(n) in the size of the data being looked up. The lookup might be O(1) after you do O(n) work, but that still comes out to O(n) in my eyes. And unless you have a perfect hash or a large hash table, there are probably several items per bucket, so it devolves into a small linear search at some point anyway. I think hash tables are awesome, but I do not get the O(1) designation unless it is just supposed to be theoretical.

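    The usual resolution: the O(1) counts probes as a function of the number of stored items n. Hashing a key costs O(key length), which is independent of n, and resizing keeps the load factor (expected bucket length) constant. A minimal chaining sketch (my own illustration, not a library API):

      class ChainedHashTable:
          def __init__(self, nbuckets=16):
              self.buckets = [[] for _ in range(nbuckets)]

          def _bucket(self, key):
              # hashing cost depends on the key's size, not on how many items are stored
              return self.buckets[hash(key) % len(self.buckets)]

          def put(self, key, value):
              b = self._bucket(key)
              for i, (k, _) in enumerate(b):
                  if k == key:
                      b[i] = (key, value)      # overwrite an existing key
                      return
              b.append((key, value))           # buckets stay short, O(1) expected,
                                               # as long as the table grows with n

          def get(self, key):
              for k, v in self._bucket(key):   # short linear scan of one bucket
                  if k == key:
                      return v
              raise KeyError(key)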

  • Sort vector<int>(n) in O(n) time using O(m) space?

    - by Adam
    I have a vector<unsigned int> vec of size n. Each element in vec is in the range [0, m], there are no duplicates, and I want to sort vec. Is it possible to do better than O(n log n) time if you're allowed to use O(m) space? In the average case m is much larger than n; in the worst case m == n. Ideally I want something O(n). I get the feeling that there's a bucket-sort-ish way to do this:

      1. unsigned int aux[m];
      2. aux[vec[i]] = i;
      3. Somehow extract the permutation and permute vec.

    I'm stuck on how to do step 3. In my application m is on the order of 16k; however, this sort is in the inner loops and accounts for a significant portion of my runtime.

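    Since the values are distinct and bounded by m, step 3 can be avoided entirely: mark which values are present, then sweep the range in order (a Python sketch of the idea rather than C++; in C++ the mark array would be a std::vector<bool> or a bitset). This is O(n + m) time and O(m) space.

      def sort_unique_bounded(vec, m):
          present = bytearray(m + 1)     # O(m) space, one flag per possible value
          for v in vec:                  # O(n): mark each value as present
              present[v] = 1
          return [v for v in range(m + 1) if present[v]]   # O(m): sweep in order

      # With m around 16k the flag array is tiny, and both phases are plain
      # sequential passes, which also makes them cache-friendly in an inner loop.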

  • tight (Θ) bound

    - by tomwu
    Can someone explain this to me? For example, given this function:

      for k = 1 to lg(n)
          for j = 1 to n
              x = x + 1

    How would I analyze the tight (Θ) bound?

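    For this shape the count is direct: the inner loop does n increments, and the outer loop runs lg(n) times, so the total is n * lg(n) and the tight bound is Θ(n log n). A quick empirical check (my own sketch):

      import math

      def count_ops(n):
          x = 0
          for k in range(1, int(math.log2(n)) + 1):   # lg(n) outer iterations
              for j in range(1, n + 1):               # n inner iterations each
                  x += 1
          return x

      # count_ops(8) == 24 == 8 * lg(8); in general count_ops(n) == n * floor(lg n).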

  • Why are bugs responsible for big deficiencies in functionality given such low priority?

    - by keepitsimpleengineer
    Well, first of all, change is inevitable and mostly good. Furthermore, attempts at simplifying the user interface, such as Gnome 3 and Unity, to make Linux more inclusive hold much promise, even though they adversely affect my style of working. Additionally, though now retired, I have worked with computers for 47 years, and though I do nothing serious for others now, I still do heavy-duty things. 10.04 LTS is my big workstation, and I had three 10.10 systems for MythTV, one of which is further adapted for video and related work. The MythTV systems were on 10.10 because of a dormant bug affecting installation on 10.04. My work habits consistently use dual monitors and the Compiz cube and 3D windows, with the computing horsepower to support them. Dual monitors with separate X screens have not been functional since 11.04, and the cube/3D windows are not functional in Unity and have diminished functionality in Gnome. There is a bug filed (after upgrading to 12.04 amd64, Gnome Classic does not properly draw the second screen). I have mitigated the situation somewhat by switching to Xubuntu and eschewing Unity. The question that comes to mind is why this bug is not given more attention, in that it nearly cuts functionality in half for more capable workstations. Sample workspace... Please know that I appreciate all the hard work and dedication required to pull off something as big as Ubuntu et al.

