Search Results

Search found 1933 results on 78 pages for 'genetic algorithms'.

Page 16 of 78

  • questions on a particular algorithm

    - by paul smith
    Upon searching for a fast prime-checking algorithm, I stumbled upon this: public static boolean isP(long n) { if (n==2 || n==3) return true; if ((n&0x1)==0 || n%3==0 || n<2) return false; long root=(long)Math.sqrt(n)+1L; // we check just numbers of the form 6*k+1 and 6*k-1 for (long k=6;k<=root;k+=6) { if (n%(k-1)==0) return false; if (n%(k+1)==0) return false; } return true; } My questions are: Why is long being used everywhere instead of int? Because with a long type the argument could be much larger than Integer.MAX_VALUE, thus making the method more flexible? In the second 'if', is n&0x1 the same as n%2? If so, why didn't the author just use n%2? To me it's more readable. On the line that sets the 'root' variable, why add the 1L? What is the run-time complexity? Is it O(sqrt(n/6)) or O(sqrt(n)/6)? Or would we just say O(n)?
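
    For what it's worth, here is a commented sketch of the same 6k±1 trial-division idea in Python (an illustration, not the asker's Java). The loop visits roughly sqrt(n)/6 values of k, and since big-O drops constant factors, O(sqrt(n)/6) and O(sqrt(n)) are the same class.

```python
import math

def is_prime(n: int) -> bool:
    """Trial division that only tests divisors of the form 6k-1 and 6k+1."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    # n & 1 equals n % 2 for non-negative n: both just test the lowest bit.
    if n % 2 == 0 or n % 3 == 0:
        return False
    # Any composite n has a divisor <= sqrt(n); the +1 guards against
    # truncation of the square root.
    root = math.isqrt(n) + 1
    for k in range(6, root + 1, 6):
        if n % (k - 1) == 0 or n % (k + 1) == 0:
            return False
    return True

# The loop runs about sqrt(n)/6 times, so the cost is O(sqrt(n)).
print([x for x in range(2, 60) if is_prime(x)])
```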

    Read the article

  • Why did visual programming never take off and what future paradigms might change that?

    - by Rego
    As "visual" OSs such as Android, iOS and the promised Windows 8 become more popular, it does not seem to me that we programmers have new ways to code using these new technologies, due to a possible lack of new visual programming language paradigms. I've seen several discussions about incompatibilities between the current coding development environment and the new OS approaches of Windows 8, Android and other tablet OSs. I mean, today if we have a new tablet, an external keyboard is almost a requirement for coding (it seems to me it's very difficult to program using the touch screen), precisely because coding assistance is not designed for "writing" thousands of lines of code. So, how advanced would these "new" visual programming language paradigms have to be? Which characteristics would these new paradigms require?

    Read the article

  • What is the Big-O time complexity of this algorithm

    - by grebwerd
    I was wondering what the run time of this small program would be? #include <stdio.h> #include <stdlib.h> int main(int argc, char* argv[]) { int i; int j; int inputSize; int sum = 0; if(argc == 1) inputSize = 16; else inputSize = atoi(argv[1]); for(i = 1; i <= inputSize; i++){ for(j = i; j < inputSize; j *= 2){ printf("The value of sum is %d\n", ++sum); } } } I figured the total count is Σ from i=1 to n of floor(log n − log(n−i)), i.e. each term of the sum is the floor of log(n) − log(n−i). Would the run time be n log n?
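
    One quick sanity check on the bound is to count the inner printf calls directly. A small empirical sketch in Python (a check, not a proof):

```python
import math

def count_iterations(n: int) -> int:
    """Count how many times the inner printf would run for inputSize = n."""
    count = 0
    for i in range(1, n + 1):
        j = i
        while j < n:          # mirrors: for(j = i; j < inputSize; j *= 2)
            count += 1
            j *= 2
    return count

for n in (16, 256, 4096, 65536):
    c = count_iterations(n)
    print(f"n={n:6d}  count={c:7d}  count/n={c/n:.2f}  count/(n*log2 n)={c/(n*math.log2(n)):.3f}")
```

    The count/n column settles near a constant while count/(n*log2 n) keeps shrinking, which suggests the doubling inner loop makes the total work linear in n rather than n log n.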

    Read the article

  • How to calculate Sin function quicker and more precisely?

    - by user1297181
    I want to calculate y(n) = 32677·Sin(45/1024·n), where y is an integer and n ranges from 0 to 2048. How can I make this computation quicker and more precise? Here is a reference answer: since Sin(a+b) = Sin(a)Cos(b) + Cos(a)Sin(b) and Cos(a+b) = Cos(a)Cos(b) − Sin(a)Sin(b), I can store only Sin(45/1024·1) and Cos(45/1024·1), and then use the recurrence: Sin(45/1024·2) = Sin(45/1024·1 + 45/1024·1), Cos(45/1024·2) = Cos(45/1024·1 + 45/1024·1), ..., Sin(45/1024·n) = Sin(45/1024·(n−1) + 45/1024·1), Cos(45/1024·n) = Cos(45/1024·(n−1) + 45/1024·1). This way may be quicker and avoids storing a large array.
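
    A Python sketch of that recurrence, with the angle assumed to be in degrees (the post does not say) and the scale factor copied from the post. Rounding error accumulates very slowly in double precision, but re-seeding from the library sin every few hundred steps is a cheap safeguard if this gets ported to fixed point.

```python
import math

STEP = math.radians(45.0 / 1024.0)   # step angle, assuming the post means degrees
SCALE = 32677                        # scale factor from the post

def sine_table(n_max: int):
    """Build y(n) = SCALE*sin(STEP*n) incrementally from one stored sin/cos pair."""
    sin_step, cos_step = math.sin(STEP), math.cos(STEP)
    s, c = 0.0, 1.0                  # sin(0), cos(0)
    out = []
    for _ in range(n_max + 1):
        out.append(int(round(SCALE * s)))
        # angle addition: sin(a+b) = sin a cos b + cos a sin b
        #                 cos(a+b) = cos a cos b - sin a sin b
        s, c = s * cos_step + c * sin_step, c * cos_step - s * sin_step
    return out

approx = sine_table(2048)
exact = [int(round(SCALE * math.sin(STEP * n))) for n in range(2049)]
print(max(abs(a - e) for a, e in zip(approx, exact)))  # worst-case drift in LSBs
```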

    Read the article

  • algorithm for Virtual Machine(VM) Consolidation in Cloud

    - by devansh dalal
    PROBLEM: We have N physical machines (PMs), each with RAM Ri and CPU Ci, and a set of currently scheduled VMs, each with RAM requirement ri and CPU requirement ci respectively. Moving (migrating) any VM from one PM to another has an associated cost which depends on its RAM ri. A PM with no VMs is shut down to save power. Our target is to minimize the weighted sum of (N, migration cost) by migrating some VMs, i.e. minimize the number of working PMs while not degrading the service level through excessive migrations. My approach: the brute-force approach is to choose the minimum-loaded PM and try to fit its VMs onto other PMs with a First Fit Decreasing algorithm (as sketched below), or we can select victim PMs and target PMs based on their load level and shut down victims where possible by moving their VMs to targets. I tried this greedy approach on data from Baadal (the IIT-D cloud) but it isn't giving promising results. I have also tried to study ant colony optimization for dynamic VM consolidation but was unable to understand much of it. I used these links: http://dumas.ccsd.cnrs.fr/docs/00/72/52/15/PDF/Esnault.pdf http://hal.archives-ouvertes.fr/docs/00/72/38/56/PDF/RR-8032.pdf Would anyone please explain the solution or suggest a new approach for better performance? Thanks in advance.
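
    For reference, a minimal sketch of the first-fit-decreasing evacuation step mentioned above, with toy data structures (not Baadal's): it only answers whether one victim PM's VMs can be re-packed onto the given targets.

```python
def can_evacuate(victim_vms, targets):
    """Try to place every VM from a victim PM onto the target PMs, largest RAM first.

    victim_vms: list of (ram, cpu) requirements
    targets:    list of dicts with 'free_ram' and 'free_cpu' (mutated as VMs are placed)
    Returns a migration plan of ((ram, cpu), target index) or None if it does not fit.
    """
    plan = []
    for ram, cpu in sorted(victim_vms, key=lambda v: v[0], reverse=True):
        for idx, pm in enumerate(targets):
            if pm["free_ram"] >= ram and pm["free_cpu"] >= cpu:
                pm["free_ram"] -= ram
                pm["free_cpu"] -= cpu
                plan.append(((ram, cpu), idx))
                break
        else:
            return None   # this VM fits nowhere, so the victim PM cannot be shut down
    return plan

targets = [{"free_ram": 8, "free_cpu": 4}, {"free_ram": 6, "free_cpu": 8}]
print(can_evacuate([(4, 2), (5, 3), (2, 1)], targets))
```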

    Read the article

  • Algorithm for rating books: Relative perception

    - by suneet
    So I am developing an application for rating books (think IMDB for books) using a relational database. Problem statement: let's say book "A" deserves 8.5 in an absolute sense. If A is the best book I have ever read, I'll most probably rate it 9.5, whereas for someone else it might be just an average book, so they will rate it lower (say around 8). Let's assume 4 such people rate it 8. If there are 10 people like me (who haven't ever read great literature) and they all rate it 9.5-10, this will push its cumulative rating above 9: (9.5*10 + 8*4) / 14 = 9.1, whereas we wanted the result to be 8.5. How can I take care of (normalize) this bias due to the differing perceptions of individuals? My proposed solution: here's one way I think it could be solved. We can have a variable Lit_coefficient which tells us how much knowledge a user has about literature. If I rate "A" 9.5 and person "X" rates it 8, then X must have read books much better than "A", and thus his Lit_coefficient should be higher. We can then normalize the ratings according to each user's Lit_coefficient. Could there be a better algorithm/solution for this?
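
    One hedged sketch of the normalisation idea (not a full solution): rescale each rater against their own history before averaging, e.g. a per-user z-score, so a habitual "9.5 rater" and a habitual "8 rater" end up on the same scale.

```python
from statistics import mean, pstdev

def normalized_average(ratings_by_user, book_id):
    """ratings_by_user: {user: {book: rating}} -> bias-adjusted score for book_id."""
    zs = []
    for user, ratings in ratings_by_user.items():
        if book_id not in ratings or len(ratings) < 2:
            continue                      # need some history to estimate the user's own scale
        values = list(ratings.values())
        mu, sigma = mean(values), pstdev(values)
        if sigma == 0:
            continue                      # user rates everything identically: no information
        zs.append((ratings[book_id] - mu) / sigma)
    # map the mean z-score back to a 0-10 style scale; 5 +/- 2 per z unit is arbitrary
    return 5 + 2 * mean(zs) if zs else None
```

    The Lit_coefficient proposal is similar in spirit: both estimate and remove a per-user bias. Recommender systems usually make that explicit with a baseline model (rating ≈ global mean + user bias + book effect) fitted from the data.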

    Read the article

  • What is an appropriate language for expressing initial stages of algorithm refinement?

    - by hydroparadise
    First, this is not a homework assignment, but you can treat it as such ;). I found the following question in the published paper The Camel Has Two Humps. I was not a CS major in college (I majored in MIS/Management), but I have a job where I find myself coding quite often. For a non-trivial programming problem, which one of the following is an appropriate language for expressing the initial stages of algorithm refinement? (a) A high-level programming language. (b) English. (c) Byte code. (d) The native machine code for the processor on which the program will run. (e) Structured English (pseudocode). What I do know is that you usually want to start your design by writing down pseudocode and then moving to/writing in the desired technology (because we all do that, right?). But I never thought about it in terms of refinement. I mean, if you were the original designer, then you might have access to the original pseudocode. But realistically, when I have to maintain/refactor/refine somebody else's code, I just keep trucking with the language it currently resides in. Does anybody have a definitive answer to this? As a side note, I did a quick scan of the paper, as I haven't read every single detail. It presents various score statistics, but I can't find where the answers are within the paper.

    Read the article

  • Does LINQ require significantly more processing cycles and memory than lower-level data iteration techniques?

    - by Matthew Patrick Cashatt
    Background: I am in the process of enduring grueling tech interviews for positions that use the .NET stack, some of which include silly questions like this one, and some questions that are more valid. I recently came across an issue that may be valid, but I want to check with the community here to be sure. When asked by an interviewer how I would count the frequency of words in a text document and rank the results, I answered that I would: use a stream object to put the text file in memory as a string; split the string into an array on spaces while ignoring punctuation; and use LINQ against the array to .GroupBy() and .Count(), then OrderBy() said count. I got this answer wrong for two reasons: Streaming an entire text file into memory could be disastrous. What if it was an entire encyclopedia? Instead I should stream one block at a time and begin building a hash table. LINQ is too expensive and requires too many processing cycles. I should have built a hash table instead and, for each iteration, only added a word to the hash table if it didn't already exist and then incremented its count. The first reason seems, well, reasonable. But the second gives me more pause. I thought that one of the selling points of LINQ is that it simply abstracts away lower-level operations like hash tables but that, under the veil, it is still the same implementation. Question: aside from a few additional processing cycles to call any abstracted methods, does LINQ require significantly more processing cycles to accomplish a given data iteration task than a lower-level approach (such as building a hash table) would?
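
    For comparison, a short sketch of the answer the interviewer wanted, shown in Python for brevity (a dict plays the role of the hash table, and line-by-line reading stands in for block reads; in C# the equivalent would be a StreamReader loop filling a Dictionary<string,int>):

```python
import re
from collections import defaultdict

def word_frequencies(path):
    counts = defaultdict(int)          # hash table: word -> count
    with open(path, encoding="utf-8") as f:
        for line in f:                 # stream; never hold the whole file in memory
            for word in re.findall(r"[a-z']+", line.lower()):
                counts[word] += 1
    # rank by descending frequency
    return sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
```

    On the LINQ point: as far as I know, GroupBy builds a hash-based lookup internally, so the asymptotic work is comparable to the manual table; the difference is constant-factor overhead (materialising the groups, delegate calls), not a different complexity class.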

    Read the article

  • How can I permute pairs across a set?

    - by sila
    I am writing a bet-settling app in C# and WinForms. I have 6 selections, 4 of which have won. I know that, using the following formula from Excel: =FACT(selections)/(FACT(selections-doubles))/FACT(doubles) (this is coded into my app and working well), I can work out how many possible doubles (e.g., AB, AC, AD, AE, BC, BD, BE, etc.) need to be resolved. But what I can't figure out is how to do the actual calculation. How can I efficiently code it so that every combination of A, B, C, and D gets calculated? All my efforts thus far on paper have proved to be ugly and verbose: is there an elegant solution to this problem?
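
    If it helps, the enumeration itself is just "all unordered pairs of winners". A Python sketch (C# can do the same with two nested loops or a small recursive helper); the odds below are made up, and it assumes decimal odds, where a winning double returns stake × odds1 × odds2:

```python
from itertools import combinations

winners = {"A": 3.0, "B": 2.5, "C": 4.0, "D": 1.8}   # selection -> decimal odds (made up)
stake_per_double = 1.0

total_return = 0.0
for first, second in combinations(sorted(winners), 2):   # every unordered pair, once
    payout = stake_per_double * winners[first] * winners[second]
    total_return += payout
    print(first + second, payout)

print("return on winning doubles:", total_return)
```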

    Read the article

  • If your algorithm is correct, does it matter how long it took you to write it?

    - by John Isaacks
    I recently found out that Facebook has a programming challenge that, if completed correctly, automatically gets you a phone interview. There is a sample challenge that asks you to write an algorithm that can solve a Tower of Hanoi type problem. Given a number of pegs and discs, and an initial and final configuration, your algorithm must determine the fewest steps possible to get to the final configuration, and output the steps. This sample challenge gives you a 45-minute time limit, but allows you to still test your code to see if it passes once your time limit expires. I did not know of any cute math solution that could solve it, and I didn't want to look for one since I think that would be cheating. So I tried to solve the challenge the best I could on my own. I was able to make an algorithm that worked and passed. However, it took me over 4 hours to write, much longer than the 45-minute limit. Since it took me so much longer than the allotted time, I have not attempted the actual challenge. This got me wondering, though: in reality, does it really matter that it took me that long? Is this a sign that I will not be able to get a job at a place like this (not just Facebook, but Google, Fog Creek, etc.) and need to lower my aspirations, or should the fact that I passed on my first attempt, even though it took too long, be taken as a good sign?

    Read the article

  • Automated tests for differencing algorithm

    - by Matthew Rodatus
    We are designing a differencing algorithm (based on Longest Common Subsequence) that compares a source text and a modified copy to extract the new content (i.e. content that is only in the modified copy). I'm currently compiling a library of test case data. We need to be able to run automated tests that verify the test cases, but we don't want to verify strict accuracy. Given the heuristic nature of our algorithm, we need our test pass/failures to be fuzzy. We want to specify a threshold of overlap between the desired result and the actual result (i.e. the content that is extracted). I have a few sketches in my mind as to how to solve this, but has anyone done this before? Does anyone have guidance or ideas about how to do this effectively?
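
    One hedged sketch of the fuzzy pass/fail idea: tokenise the expected and actual extractions, compute a similarity ratio, and assert against a threshold instead of strict equality. difflib is used below purely as a stand-in; Jaccard similarity over word sets is an equally easy choice.

```python
import difflib

def assert_close_enough(expected: str, actual: str, threshold: float = 0.85):
    """Fail only if the extracted content overlaps the desired content too little."""
    ratio = difflib.SequenceMatcher(None, expected.split(), actual.split()).ratio()
    assert ratio >= threshold, f"overlap {ratio:.2f} below threshold {threshold}"
    return ratio

# usage inside a test case
expected = "the quick brown fox jumps over the lazy dog"
actual = "quick brown fox jumps over a lazy dog"
print(assert_close_enough(expected, actual, threshold=0.8))
```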

    Read the article

  • Name of the Countdown Numbers round problem - and algorithmic solutions?

    - by Dai
    For the non-Brits in the audience, there's a segment of a daytime game show where contestants have a set of 6 numbers and a randomly generated target number. They have to reach the target number using any (but not necessarily all) of the 6 numbers, using only arithmetic operators. All calculations must result in positive integers. An example: Youtube: Countdown - The Most Extraordinary Numbers Game Ever? A detailed description is given on Wikipedia: Countdown (Game Show). For example: the contestant selects 6 numbers - two large (possibilities include 25, 50, 75, 100) and four small (numbers 1 .. 10, each included twice in the pool). The numbers picked are 75, 50, 2, 3, 8, 7, and the target number is 812. One attempt is (75 + 50 - 8) * 7 - (3 * 2) = 813 (this scores 7 points for a solution within 5 of the target). An exact answer would be (50 + 8) * 7 * 2 = 812 (this would have scored 10 points for exactly matching the target). Obviously this problem existed before the advent of TV, but the Wikipedia article doesn't give it a name. I also saw this game at a primary school I attended, where it was called "Crypto" as an inter-class competition - but searching for that now reveals nothing. I took part in it a few times and my dad wrote an Excel spreadsheet that attempted to brute-force the problem. I don't remember how it worked (only that it didn't, what with Excel's 65535 row limit), but surely there must be an algorithmic solution for the problem. Maybe there's a solution that works the way human cognition does (e.g. in parallel, finding numbers 'close enough', then taking candidates and performing 'smaller' operations).
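
    The round comes from the French show "Des chiffres et des lettres", where it is called "Le compte est bon"; in English discussions it is usually just "the Countdown numbers game". A plain brute-force solver is small enough to sketch in Python: recursively replace any pair of numbers with the result of an allowed operation (keeping everything a positive integer) and track the closest value seen. It is exhaustive and unmemoised, which is fine for six numbers, though it takes a few seconds in CPython.

```python
from itertools import combinations

def countdown(numbers, target):
    """Brute-force search over +, -, *, / (positive integers only)."""
    best = {"dist": float("inf"), "expr": None}

    def consider(value, expr):
        if abs(value - target) < best["dist"]:
            best["dist"], best["expr"] = abs(value - target), expr

    def search(items):                      # items: list of (value, expression string)
        for value, expr in items:
            consider(value, expr)
        if len(items) < 2:
            return
        for (i, (a, ea)), (j, (b, eb)) in combinations(enumerate(items), 2):
            rest = [x for k, x in enumerate(items) if k != i and k != j]
            hi, lo = ((a, ea), (b, eb)) if a >= b else ((b, eb), (a, ea))
            results = [(a + b, f"({ea}+{eb})"), (a * b, f"({ea}*{eb})")]
            if hi[0] - lo[0] > 0:                       # subtraction must stay positive
                results.append((hi[0] - lo[0], f"({hi[1]}-{lo[1]})"))
            if lo[0] != 0 and hi[0] % lo[0] == 0:       # division must be exact
                results.append((hi[0] // lo[0], f"({hi[1]}/{lo[1]})"))
            for value, expr in results:
                search(rest + [(value, expr)])

    search([(n, str(n)) for n in numbers])
    return best["expr"], best["dist"]

expr, dist = countdown([75, 50, 2, 3, 8, 7], 812)
print(expr, "is", dist, "away from 812")
```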

    Read the article

  • Algorithm for flattening overlapping ranges

    - by Joseph
    I am looking for a nice way of flattening (splitting) a list of potentially overlapping numeric ranges. The problem is very similar to that of this question: Fastest way to split overlapping date ranges, and many others. However, the ranges are not necessarily integers, and I am looking for a decent algorithm that can be easily implemented in Javascript or Python, etc. Example Data: Example Solution: Apologies if this is a duplicate, but I have yet to find a solution.
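
    One common approach is a boundary sweep: collect every start/end value, sort them, and emit each segment between consecutive boundaries that is covered by at least one input range. A sketch that works for floats as well as integers (coverage is tested at the segment midpoint, which assumes closed ranges):

```python
def flatten(ranges):
    """Split overlapping [start, end] ranges into non-overlapping segments
    that cover exactly the same points (endpoints may be ints or floats)."""
    points = sorted({p for r in ranges for p in r})
    segments = []
    for lo, hi in zip(points, points[1:]):
        mid = (lo + hi) / 2
        if any(start <= mid <= end for start, end in ranges):   # segment is covered
            segments.append((lo, hi))
    return segments

print(flatten([(1, 5), (3, 8), (10, 12)]))
# -> [(1, 3), (3, 5), (5, 8), (10, 12)]
```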

    Read the article

  • Looking for a non-cryptographic hash function that returns a single character

    - by makerofthings7
    Suppose I have a dictionary of ASCII words stored in uppercase. I also want to save those words into separate files so that the total word count of each file is approximately the same. By simply looking at a word I need to know which file it should be in (if it's there at all). Duplicate words should go into the same file and overwrite the last one. My first attempt at solving this problem was to use .NET's object.GetHashCode() function and .Trim() to get one of the "random" characters that pop up. I asked a similar question here. If I only use one character of object.GetHashCode() I get a hash code character in A..Z or 0..9. However, saving the result of GetHashCode to disk is a no-no, so I need a substitute. Question: what algorithm (or subset of an algorithm) is appropriate for pigeonholing strings into a single character or a range of characters (like hex 0..F, which offers 16 chars)? Real-world usage: I'll use this answer to modify the partition key used in Azure Table storage as described here.
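
    A hedged sketch of the "stable substitute for GetHashCode" idea: take the first hex digit of a cryptographic hash of the uppercased word, which gives 16 roughly even buckets and never changes between runs or machines. In .NET the same thing can be done by hashing the UTF-8 bytes with System.Security.Cryptography.MD5 (or SHA1) and keeping one nibble.

```python
import hashlib

def bucket_char(word: str) -> str:
    """Map a word to one of 16 stable bucket characters '0'..'f'."""
    digest = hashlib.md5(word.upper().encode("utf-8")).hexdigest()
    return digest[0]          # first hex nibble: 16 buckets, stable across runs

for w in ("APPLE", "apple", "ZEBRA", "QUINCE"):
    print(w, "->", bucket_char(w))
```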

    Read the article

  • Number Game Algorithm

    - by 7Aces
    Problem link - http://www.iarcs.org.in/inoi/2011/zco2011/zco2011-1b.php The task is to find the maximum score you can get in the game. Such problems, based on games, where you have to simulate, predict the result, or obtain the maximum possible score, always seem to puzzle me. I can do it with recursion by considering two cases - first number picked or last number picked - each of which again branches into two states similarly, and so on, which finally yields the maximum possible result. But it's a very time-inefficient approach, since the time increases exponentially, and that fails on the large test cases. What is the most pragmatic approach to this problem, and to such problems in general?
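
    Assuming the usual formulation of that task (two players alternately take a number from either end, both playing optimally, and you want the first player's maximum total), the exponential recursion collapses to O(n^2) once you memoise on the (left, right) interval. A hedged Python sketch:

```python
from functools import lru_cache

def best_score(nums):
    """Maximum total the first player can collect when both players pick optimally
    from either end of nums."""
    @lru_cache(maxsize=None)
    def solve(lo, hi):                    # my best score on nums[lo..hi] when it is my turn
        if lo > hi:
            return 0
        if lo == hi:
            return nums[lo]
        # after I take one end, the opponent plays optimally,
        # leaving me the worse of the two follow-up intervals
        take_left = nums[lo] + min(solve(lo + 2, hi), solve(lo + 1, hi - 1))
        take_right = nums[hi] + min(solve(lo + 1, hi - 1), solve(lo, hi - 2))
        return max(take_left, take_right)

    return solve(0, len(nums) - 1)

print(best_score([4, 7, 2, 9]))   # -> 16
```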

    Read the article

  • Performing a Depth First Search iteratively using async/parallel processing?

    - by Prabhu
    Here is a method that does a DFS search and returns a list of all items given a top level item id. How could I modify this to take advantage of parallel processing? Currently, the call to get the sub items is made one by one for each item in the stack. It would be nice if I could get the sub items for multiple items in the stack at the same time, and populate my return list faster. How could I do this (either using async/await or TPL, or anything else) in a thread safe manner? private async Task<IList<Item>> GetItemsAsync(string topItemId) { var items = new List<Item>(); var topItem = await GetItemAsync(topItemId); Stack<Item> stack = new Stack<Item>(); stack.Push(topItem); while (stack.Count > 0) { var item = stack.Pop(); items.Add(item); var subItems = await GetSubItemsAsync(item.SubId); foreach (var subItem in subItems) { stack.Push(subItem); } } return items; } I was thinking of something along these lines, but it's not coming together: var tasks = stack.Select(async item => { items.Add(item); var subItems = await GetSubItemsAsync(item.SubId); foreach (var subItem in subItems) { stack.Push(subItem); } }).ToList(); if (tasks.Any()) await Task.WhenAll(tasks); The language I'm using is C#.
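
    A hedged sketch of one way to batch the fetches, shown in Python asyncio for brevity: gather the sub-item requests for the whole frontier at once (the C# analogue is collecting the tasks for the current frontier and awaiting Task.WhenAll before pushing the returned children). Note it turns the traversal into breadth-first order, which is harmless if the goal is simply to collect every item; the tree and the two fetch functions below are made-up stand-ins for the real data calls.

```python
import asyncio

TREE = {"root": ["a", "b"], "a": ["a1", "a2"], "b": [], "a1": [], "a2": []}

async def get_item(item_id):              # stand-in for the real async lookup
    return item_id

async def get_sub_items(item_id):         # stand-in for the real async child fetch
    await asyncio.sleep(0)                # pretend I/O
    return TREE.get(item_id, [])

async def collect_items(top_id):
    """Fetch the sub-items of the whole frontier concurrently, wave by wave."""
    frontier = [await get_item(top_id)]
    items = []
    while frontier:
        items.extend(frontier)
        sub_lists = await asyncio.gather(*(get_sub_items(i) for i in frontier))
        frontier = [child for subs in sub_lists for child in subs]
    return items

print(asyncio.run(collect_items("root")))   # ['root', 'a', 'b', 'a1', 'a2']
```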

    Read the article

  • Amazon Kindle - Whispersync implementation?

    - by Bala
    For those who are not aware of Kindle's whispersync, here is how it works (from amazon.com): "...Whispersync synchronizes the bookmarks and furthest page read among devices registered to the same account. Whispersync is on by default to ensure a seamless reading experience for a book read across multiple Kindles." Can anyone give some details on how the Whispersync feature is implemented in Kindle and in the Backend of Amazon? I am guessing this implementation involves a very simple hashmap for each user account. Each hashmap maps Books with satellite information about the book. Satellite information contains bookmarks, furthest page read, device on which it was read, etc.. Thanks!

    Read the article

  • Technique to Solve Hard Programming logic

    - by Paresh Mayani
    I have heard about many techniques which are used by developers/software managers to solve hard programming logic or to create the flow of an application, a flow which developers then implement to build the actual application. Some of the techniques I know are: Flowchart, Screen layout, Data flow diagram, E-R diagram, Algorithm of every program. I'd like to know two things: (1) Are there any techniques other than these? (2) Which one is the most suitable for working out hard programming logic and the process of application creation?

    Read the article

  • Tail-recursive implementation of take-while

    - by Giorgio
    I am trying to write a tail-recursive implementation of the function take-while in Scheme (but this exercise can be done in another language as well). My first attempt was (define (take-while p xs) (if (or (null? xs) (not (p (car xs)))) '() (cons (car xs) (take-while p (cdr xs))))) which works correctly but is not tail-recursive. My next attempt was (define (take-while-tr p xs) (let loop ((acc '()) (ys xs)) (if (or (null? ys) (not (p (car ys)))) (reverse acc) (loop (cons (car ys) acc) (cdr ys))))) which is tail recursive but needs a call to reverse as a last step in order to return the result list in the proper order. I cannot come up with a solution that (1) is tail-recursive, (2) does not use reverse, (3) only uses lists as the data structure (using a functional data structure like Haskell's Data.Sequence, which allows appending elements, is not an option), and (4) has complexity linear in the size of the prefix, or at least does not have quadratic complexity (thanks to delnan for pointing this out). Is there an alternative solution satisfying all the properties above? My intuition tells me that it is impossible to accumulate the prefix of a list in a tail-recursive fashion while maintaining the original order between the elements (i.e. without the need to use reverse to adjust the result), but I am not able to prove this. Note: the solution using reverse satisfies conditions 1, 3 and 4.

    Read the article

  • Sorting Algorithm : output

    - by Aaditya
    I faced this problem on a website and I can't quite understand the output; please help me understand it. Bogosort is a dumb algorithm which shuffles the sequence randomly until it is sorted. But here we have tweaked it a little, so that if after the last shuffle several of the first elements end up in the right places, we will fix them and not shuffle those elements any further. We will do the same for the last elements if they are in the right places. For example, if the initial sequence is (3, 5, 1, 6, 4, 2) and after one shuffle we get (1, 2, 5, 4, 3, 6), we will keep 1, 2 and 6 and proceed with sorting (5, 4, 3) using the same algorithm. Calculate the expected number of shuffles for the improved algorithm to sort the sequence of the first n natural numbers, given that no elements are in the right places initially. Input: 2 6 10 Output: 2 1826/189 877318/35343 For each test case, output the expected number of shuffles needed for the improved algorithm to sort the sequence of the first n natural numbers, in the form of an irreducible fraction. I just can't understand the output.

    Read the article

  • booth multiplication algorithm

    - by grassPro
    Is Booth's algorithm for multiplication only for multiplying two negative numbers (-3 * -4) or one positive and one negative number (-3 * 4)? Whenever I multiply two positive numbers using Booth's algorithm I get a wrong result. Example: 5 * 4 A = 101 000 0 // binary of 5 is 101 S = 011 000 0 // 2's complement of 5 is 011 P = 000 100 0 // binary of 4 is 100 x = 3 y = 3 m = 5 -m = 2's complement of m r = 4 After right shift of P by 1 bit: 0 000 100 After right shift of P by 1 bit: 0 000 010 P+S = 011 001 0 After right shift by 1 bit: 0 011 001 Discarding the LSB: 001100 But that comes out to be the binary of 12. It should have been 20 (010100).
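
    Booth's algorithm works for every sign combination, including two positive operands, so a wrong positive-times-positive result usually means a slip in the register width or the arithmetic shift. Here is a hedged Python sketch of the usual A/S/P register formulation (4-bit operands assumed) that a hand trace can be checked against:

```python
def booth_multiply(m, r, bits=4):
    """Booth's algorithm on two's-complement operands of width `bits`."""
    mask = (1 << (2 * bits + 1)) - 1             # A/S/P registers are 2*bits + 1 wide
    A = (m & ((1 << bits) - 1)) << (bits + 1)    # multiplicand in the upper bits
    S = (-m & ((1 << bits) - 1)) << (bits + 1)   # negated multiplicand in the upper bits
    P = (r & ((1 << bits) - 1)) << 1             # multiplier plus an extra 0 LSB

    for _ in range(bits):
        last_two = P & 0b11
        if last_two == 0b01:
            P = (P + A) & mask
        elif last_two == 0b10:
            P = (P + S) & mask
        # arithmetic right shift by 1 within the fixed register width
        sign = P >> (2 * bits)
        P = (P >> 1) | (sign << (2 * bits))

    P >>= 1                                      # discard the extra LSB
    if P >= 1 << (2 * bits - 1):                 # reinterpret as two's complement
        P -= 1 << (2 * bits)
    return P

for a, b in ((5, 4), (-3, -4), (-3, 4), (7, -6)):
    print(a, "*", b, "=", booth_multiply(a, b))
```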

    Read the article

  • How can I estimate the entropy of a password?

    - by Wug
    Having read various resources about password strength I'm trying to create an algorithm that will provide a rough estimation of how much entropy a password has. I'm trying to create an algorithm that's as comprehensive as possible. At this point I only have pseudocode, but the algorithm covers the following: password length repeated characters patterns (logical) different character spaces (LC, UC, Numeric, Special, Extended) dictionary attacks It does NOT cover the following, and SHOULD cover it WELL (though not perfectly): ordering (passwords can be strictly ordered by output of this algorithm) patterns (spatial) Can anyone provide some insight on what this algorithm might be weak to? Specifically, can anyone think of situations where feeding a password to the algorithm would OVERESTIMATE its strength? Underestimations are less of an issue. The algorithm: // the password to test password = ? length = length(password) // unique character counts from password (duplicates discarded) uqlca = number of unique lowercase alphabetic characters in password uquca = number of uppercase alphabetic characters uqd = number of unique digits uqsp = number of unique special characters (anything with a key on the keyboard) uqxc = number of unique special special characters (alt codes, extended-ascii stuff) // algorithm parameters, total sizes of alphabet spaces Nlca = total possible number of lowercase letters (26) Nuca = total uppercase letters (26) Nd = total digits (10) Nsp = total special characters (32 or something) Nxc = total extended ascii characters that dont fit into other categorys (idk, 50?) // algorithm parameters, pw strength growth rates as percentages (per character) flca = entropy growth factor for lowercase letters (.25 is probably a good value) fuca = EGF for uppercase letters (.4 is probably good) fd = EGF for digits (.4 is probably good) fsp = EGF for special chars (.5 is probably good) fxc = EGF for extended ascii chars (.75 is probably good) // repetition factors. few unique letters == low factor, many unique == high rflca = (1 - (1 - flca) ^ uqlca) rfuca = (1 - (1 - fuca) ^ uquca) rfd = (1 - (1 - fd ) ^ uqd ) rfsp = (1 - (1 - fsp ) ^ uqsp ) rfxc = (1 - (1 - fxc ) ^ uqxc ) // digit strengths strength = ( rflca * Nlca + rfuca * Nuca + rfd * Nd + rfsp * Nsp + rfxc * Nxc ) ^ length entropybits = log_base_2(strength) A few inputs and their desired and actual entropy_bits outputs: INPUT DESIRED ACTUAL aaa very pathetic 8.1 aaaaaaaaa pathetic 24.7 abcdefghi weak 31.2 H0ley$Mol3y_ strong 72.2 s^fU¬5ü;y34G< wtf 88.9 [a^36]* pathetic 97.2 [a^20]A[a^15]* strong 146.8 xkcd1** medium 79.3 xkcd2** wtf 160.5 * these 2 passwords use shortened notation, where [a^N] expands to N a's. ** xkcd1 = "Tr0ub4dor&3", xkcd2 = "correct horse battery staple" The algorithm does realize (correctly) that increasing the alphabet size (even by one digit) vastly strengthens long passwords, as shown by the difference in entropy_bits for the 6th and 7th passwords, which both consist of 36 a's, but the second's 21st a is capitalized. However, they do not account for the fact that having a password of 36 a's is not a good idea, it's easily broken with a weak password cracker (and anyone who watches you type it will see it) and the algorithm doesn't reflect that. It does, however, reflect the fact that xkcd1 is a weak password compared to xkcd2, despite having greater complexity density (is this even a thing?). How can I improve this algorithm? 
Addendum 1 Dictionary attacks and pattern based attacks seem to be the big thing, so I'll take a stab at addressing those. I could perform a comprehensive search through the password for words from a word list and replace words with tokens unique to the words they represent. Word-tokens would then be treated as characters and have their own weight system, and would add their own weights to the password. I'd need a few new algorithm parameters (I'll call them lw, Nw ~= 2^11, fw ~= .5, and rfw) and I'd factor the weight into the password as I would any of the other weights. This word search could be specially modified to match both lowercase and uppercase letters as well as common character substitutions, like that of E with 3. If I didn't add extra weight to such matched words, the algorithm would underestimate their strength by a bit or two per word, which is OK. Otherwise, a general rule would be, for each non-perfect character match, give the word a bonus bit. I could then perform simple pattern checks, such as searches for runs of repeated characters and derivative tests (take the difference between each character), which would identify patterns such as 'aaaaa' and '12345', and replace each detected pattern with a pattern token, unique to the pattern and length. The algorithmic parameters (specifically, entropy per pattern) could be generated on the fly based on the pattern. At this point, I'd take the length of the password. Each word token and pattern token would count as one character; each token would replace the characters they symbolically represented. I made up some sort of pattern notation, but it includes the pattern length l, the pattern order o, and the base element b. This information could be used to compute some arbitrary weight for each pattern. I'd do something better in actual code. Modified Example: Password: 1234kitty$$$$$herpderp Tokenized: 1 2 3 4 k i t t y $ $ $ $ $ h e r p d e r p Words Filtered: 1 2 3 4 @W5783 $ $ $ $ $ @W9001 @W9002 Patterns Filtered: @P[l=4,o=1,b='1'] @W5783 @P[l=5,o=0,b='$'] @W9001 @W9002 Breakdown: 3 small, unique words and 2 patterns Entropy: about 45 bits, as per modified algorithm Password: correcthorsebatterystaple Tokenized: c o r r e c t h o r s e b a t t e r y s t a p l e Words Filtered: @W6783 @W7923 @W1535 @W2285 Breakdown: 4 small, unique words and no patterns Entropy: 43 bits, as per modified algorithm The exact semantics of how entropy is calculated from patterns is up for discussion. I was thinking something like: entropy(b) * l * (o + 1) // o will be either zero or one The modified algorithm would find flaws with and reduce the strength of each password in the original table, with the exception of s^fU¬5ü;y34G<, which contains no words or patterns.
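
    A direct, hedged Python transcription of the core scoring formula above, with the parameter values copied from the pseudocode and the dictionary/pattern pass from the addendum deliberately left out. The numbers it prints need not reproduce the question's table exactly, since the parameters there are explicitly marked as guesses (the special-character alphabet below is another such guess).

```python
import math
import string

# (alphabet size, per-character growth factor, member set) per character class
CLASSES = {
    "lower":   (26, 0.25, set(string.ascii_lowercase)),
    "upper":   (26, 0.40, set(string.ascii_uppercase)),
    "digit":   (10, 0.40, set(string.digits)),
    "special": (32, 0.50, set(string.punctuation + " ")),
}

def entropy_bits(password: str) -> float:
    strength = 0.0
    for size, growth, members in CLASSES.values():
        unique = len(set(password) & members)
        repetition_factor = 1 - (1 - growth) ** unique   # rf = 1 - (1 - f)^uq
        strength += repetition_factor * size
    # anything not matched above is treated as "extended" characters
    known = set().union(*(m for _, _, m in CLASSES.values()))
    unique_ext = len(set(password) - known)
    strength += (1 - (1 - 0.75) ** unique_ext) * 50
    # entropybits = log2(strength ^ length) = length * log2(strength)
    return len(password) * math.log2(strength) if strength > 1 else 0.0

for pw in ("aaa", "H0ley$Mol3y_", "Tr0ub4dor&3"):
    print(pw, round(entropy_bits(pw), 1))
```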

    Read the article

  • How to manually detect deadlocks

    - by Dawson
    I understand the concepts of deadlock well enough, but when I'm given a problem like the one below I'm not sure how to go about solving it. I can draw a resource allocation graph, but I'm not sure how to solve it from there. Is there a better more formal way of solving this? Consider a system with five processes, P1 through P5, and five resources, R1 through R5. Resource ownership is as follows. • P1 holds R1 and wants R3 • P2 holds R2 and wants R1 • P3 holds R3 and wants R5 • P4 holds R5 and wants R2 • P5 holds R4 and wants R2 Is this system deadlocked? Justify your answer. If the system is deadlocked, list the involved processes.
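
    The formal check is exactly what the resource-allocation graph encodes: build a wait-for graph (an edge P -> Q whenever P wants a resource that Q holds) and look for a cycle; the processes on a cycle are deadlocked. A small sketch with the data from the question:

```python
holds = {"P1": "R1", "P2": "R2", "P3": "R3", "P4": "R5", "P5": "R4"}
wants = {"P1": "R3", "P2": "R1", "P3": "R5", "P4": "R2", "P5": "R2"}

owner = {r: p for p, r in holds.items()}
# wait-for graph: each process points at the process holding the resource it wants
wait_for = {p: owner[r] for p, r in wants.items() if r in owner}

def find_cycle(graph):
    for start in graph:
        path, node = [], start
        while node in graph and node not in path:
            path.append(node)
            node = graph[node]
        if node in path:                      # walked back onto the path: a cycle
            return path[path.index(node):]
    return None

print(find_cycle(wait_for))   # e.g. ['P1', 'P3', 'P4', 'P2']
```

    This prints the cycle P1 -> P3 -> P4 -> P2, so the system is deadlocked with those four processes involved; P5 waits on R2 behind the cycle but is not on it.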

    Read the article

  • Is there a word or description for this type of query?

    - by Nick
    We have a requirement to find a result in a collection of records based on a prioritised set of search criteria against a relational db (I'm talking indexed field matching here rather than text search). The way we are thinking about designing the query is to begin with a highly refined and specific set of criteria. If there are no results for this initial query, we want to progressively reduce the criteria one by one, in order of decreasing priority, querying each successively less specific set of criteria until we find a result we can accept. Alternatively, we have considered starting with a smaller set of criteria and adding criteria until the number of results has been narrowed down to a final set. What I would like to know is whether an existing term for this type of query exists, so that we can model our own on existing patterns and use best practice.
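
    I have seen this described as query relaxation, and also as tiered or fallback matching, though I would not call any of those a settled standard name. Whatever it is called, the mechanical part is a loop from the most specific tier to the least; a hedged sketch, with run_query standing in for whatever data-access call you actually use:

```python
def first_match(criteria_tiers, run_query):
    """criteria_tiers: list of criteria dicts, most specific first.
    run_query: hypothetical callable that executes one filtered query
    and returns a list of rows."""
    for tier, criteria in enumerate(criteria_tiers):
        rows = run_query(criteria)
        if rows:
            return tier, rows        # also report which tier finally matched
    return None, []

# usage sketch: drop the lowest-priority key at each step (field names made up)
full = {"model": "X1", "colour": "red", "year": 2012, "region": "EU"}
priority = ["model", "colour", "year", "region"]      # most to least important
tiers = [{k: full[k] for k in priority[:n]} for n in range(len(priority), 0, -1)]
```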

    Read the article

  • Possible applications of algorithm devised for differentiating between structured vs random text

    - by rooznom
    I have written a program that can rapidly (within 5 seconds on a desktop with 2 GB RAM and a 2.33 GHz CPU) differentiate between structured text (e.g. English text) and random alphanumeric strings. It can also provide a probability score for the prediction. Are there any practical applications/uses for such a program? Note that the program is based on entropy models and does not have any dictionary comparisons in its workflow. Thanks in advance for your responses.
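
    For readers wondering what such a program looks like in miniature: a character-level entropy score already separates English-like text from uniform random alphanumerics reasonably well. This is only a toy illustration, not the poster's model, which presumably uses richer statistics (e.g. n-gram or word-level entropy).

```python
import math
from collections import Counter

def char_entropy(text: str) -> float:
    """Shannon entropy (bits per character) of the character distribution."""
    counts = Counter(text.lower())
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

english = "the quick brown fox jumps over the lazy dog and keeps on running"
random_ = "q8Zk2mP0vX7rT4bN9cL1wY6hJ3dF5gS8aQ2zR7"
print(round(char_entropy(english), 2), round(char_entropy(random_), 2))
```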

    Read the article
