Search Results

Search found 7490 results on 300 pages for 'algorithm analysis'.


  • How do I make my encryption algorithm encrypt more than 128 bits?

    - by Ranhiru
    OK, now I have coded an implementation of AES-128 :) It is working fine. It takes in 128 bits, encrypts, and returns 128 bits. So how do I enhance my function so that it can handle more than 128 bits? How do I make the encryption algorithm handle larger strings? Can the same algorithm be used to encrypt files? :) The function definition is public byte[] Cipher(byte[] input) { }
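
    A block cipher on its own only handles a single 128-bit block; longer input is handled by a mode of operation (plus padding), and files can be processed the same way by reading them in block-sized chunks. Below is a minimal sketch of the chaining idea in Python, assuming a placeholder cipher_block function that stands in for the question's 128-bit Cipher primitive (the names are illustrative; a real application should use a vetted crypto library and an authenticated mode rather than hand-rolled CBC):

```python
import os

BLOCK_SIZE = 16  # 128 bits

def cipher_block(block: bytes, key: bytes) -> bytes:
    """Placeholder for the existing 128-bit block cipher (the AES-128 Cipher function)."""
    raise NotImplementedError

def pkcs7_pad(data: bytes) -> bytes:
    # Pad so the length is a multiple of the block size (PKCS#7 style).
    pad_len = BLOCK_SIZE - (len(data) % BLOCK_SIZE)
    return data + bytes([pad_len]) * pad_len

def cbc_encrypt(plaintext: bytes, key: bytes) -> bytes:
    """Encrypt arbitrary-length data by chaining 128-bit blocks (CBC mode)."""
    padded = pkcs7_pad(plaintext)
    iv = os.urandom(BLOCK_SIZE)            # random IV, prepended to the ciphertext
    previous = iv
    out = bytearray(iv)
    for i in range(0, len(padded), BLOCK_SIZE):
        block = padded[i:i + BLOCK_SIZE]
        # XOR with the previous ciphertext block, then run the 128-bit cipher.
        mixed = bytes(a ^ b for a, b in zip(block, previous))
        previous = cipher_block(mixed, key)
        out.extend(previous)
    return bytes(out)
```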

    Read the article

  • SSAS dimension source table changed - how to propagate changes to analysis server?

    - by Phil
    Hi, sorry if the question isn't phrased very well, but I'm new to SSAS and don't know the correct terms. I have changed the name of a table and its columns. I am using said table as a dimension for my cube, so now the cube won't process. Presumably I need to make updates in the analysis server to reflect the changes to the source database? I have no idea where to start - any help gratefully received. Thanks, Phil

    Read the article

  • Is Microsoft's LCG random generator patented?

    - by user396672
    I need a very simple pseudo-random generator (no specific quality requirements) and I found that Microsoft's variant of the LCG algorithm, used for the rand() C runtime library function, fits my needs (gcc's one seems too complex). I found the algorithm here: http://rosettacode.org/wiki/Linear_congruential_generator#C However, I worry that the algorithm (including its "magic numbers", i.e. coefficients) may be patented or restricted for use in some other way. Is it allowed to use this algorithm without any licence or patent restrictions or not? I can't use the library rand() because I need my results to be exactly reproducible on different platforms.
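
    For reference, the recurrence shown on the linked Rosetta Code page for the Microsoft C runtime's rand() is a plain LCG, so reproducing it across platforms is a few lines. A small Python sketch using the constants given on that page (the sketch says nothing about the legal question, which is what is actually being asked):

```python
class MsvcrtRand:
    """LCG with the constants listed on the linked Rosetta Code page
    for the Microsoft C runtime's rand()."""

    def __init__(self, seed: int = 0):
        self.state = seed & 0xFFFFFFFF

    def rand(self) -> int:
        # state = state * 214013 + 2531011 (mod 2^32); rand() returns bits 16..30.
        self.state = (self.state * 214013 + 2531011) & 0xFFFFFFFF
        return (self.state >> 16) & 0x7FFF

rng = MsvcrtRand(seed=42)
print([rng.rand() for _ in range(5)])  # reproducible on any platform
```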

    Read the article

  • Why does adding Crossover to my Genetic Algorithm give me worse results?

    - by MahlerFive
    I have implemented a Genetic Algorithm to solve the Traveling Salesman Problem (TSP). When I use only mutation, I find better solutions than when I add in crossover. I know that normal crossover methods do not work for TSP, so I implemented both the Ordered Crossover and the PMX Crossover methods, and both suffer from bad results. Here are the other parameters I'm using: Mutation: Single Swap Mutation or Inverted Subsequence Mutation (as described by Tiendil here) with mutation rates tested between 1% and 25%. Selection: Roulette Wheel Selection. Fitness function: 1 / distance of tour. Population size: tested 100, 200, 500; I also run the GA 5 times so that I have a variety of starting populations. Stop condition: 2500 generations. With the same dataset of 26 points, I usually get results of about 500-600 distance using purely mutation with high mutation rates. When adding crossover my results are usually in the 800 distance range. The other confusing thing is that I have also implemented a very simple Hill-Climbing algorithm to solve the problem, and when I run that 1000 times (faster than running the GA 5 times) I get results around 410-450 distance, and I would expect to get better results using a GA. Any ideas as to why my GA performs worse when I add crossover? And why is it performing much worse than a simple Hill-Climbing algorithm, which should get stuck on local maxima as it has no way of exploring once it finds a local max?
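
    For comparison, here is a minimal sketch of one common variant of the Ordered Crossover (OX1) mentioned above, in Python. A frequent cause of the symptom described is a crossover that silently duplicates or drops cities, corrupting tours, so the sketch asserts validity after every cross (the helper names are illustrative, not taken from the question's code):

```python
import random

def ordered_crossover(parent1, parent2):
    """OX1: copy a random slice from parent1, then fill the remaining
    positions with parent2's cities in the order they appear in parent2."""
    size = len(parent1)
    a, b = sorted(random.sample(range(size), 2))
    child = [None] * size
    child[a:b + 1] = parent1[a:b + 1]                 # keep a slice of parent1
    used = set(child[a:b + 1])
    fill = [city for city in parent2 if city not in used]
    positions = [i for i in range(size) if child[i] is None]
    for pos, city in zip(positions, fill):
        child[pos] = city
    assert sorted(child) == sorted(parent1), "crossover corrupted the tour"
    return child

print(ordered_crossover(list("ABCDEFGH"), list("GFEDCBAH")))
```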

    Read the article

  • Is there any algorithm that can solve ANY traditional sudoku puzzle WITHOUT guessing (or similar techniques)?

    - by justin
    Is there any algorithm that solves ANY traditional sudoku puzzle WITHOUT guessing? Here guessing means trying a candidate and seeing how far it goes; if a contradiction is found with the guess, backtracking to the guessing step and trying another candidate; when all candidates are exhausted without success, backtracking to the previous guessing step (if there is one; otherwise the puzzle proves invalid), etc. EDIT1: Thank you for your replies. Traditional sudoku means 81-box sudoku, without any other constraints. Let us say we know the solution is unique; is there any algorithm that can GUARANTEE to solve it without backtracking? Backtracking is a universal tool, and I have nothing against it, but using a universal tool to solve sudoku decreases the value and fun in deciphering (manually, or by computer) sudoku puzzles. How can a human being solve the so-called "hardest sudoku in the world"? Does he need to guess? I heard some researchers accidentally found that their algorithm for some data analysis can solve all sudoku. Is that true? Do they have to guess too?
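
    As a concrete illustration of what "without guessing" usually means in practice, here is a minimal Python sketch of pure constraint propagation (naked and hidden singles only). It solves many published puzzles outright, but it is only a sketch of the propagation idea, not the guaranteed guess-free solver the question asks about; the hardest puzzles defeat singles alone:

```python
def units_of(r, c):
    """The three units (row, column, 3x3 box) that contain cell (r, c)."""
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return [[(r, j) for j in range(9)],
            [(i, c) for i in range(9)],
            [(br + i, bc + j) for i in range(3) for j in range(3)]]

def candidates(grid, r, c):
    """Digits that can still legally be placed in empty cell (r, c)."""
    seen = {grid[i][j] for unit in units_of(r, c) for (i, j) in unit}
    return [d for d in range(1, 10) if d not in seen]

def propagate(grid):
    """Place naked singles (one candidate left) and hidden singles (only cell
    in a unit that can take a digit) until no further progress is made.
    Returns True if the grid was completely solved without any guessing."""
    changed = True
    while changed:
        changed = False
        for r in range(9):
            for c in range(9):
                if grid[r][c]:
                    continue
                cands = candidates(grid, r, c)
                if len(cands) == 1:                   # naked single
                    grid[r][c] = cands[0]
                    changed = True
                    continue
                placed = False
                for d in cands:                       # hidden single
                    for unit in units_of(r, c):
                        others = [(i, j) for (i, j) in unit
                                  if (i, j) != (r, c) and grid[i][j] == 0]
                        if all(d not in candidates(grid, i, j) for (i, j) in others):
                            grid[r][c] = d
                            changed = True
                            placed = True
                            break
                    if placed:
                        break
    return all(grid[r][c] for r in range(9) for c in range(9))
```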

    Read the article

  • Overview of Microsoft SQL Server 2008 Upgrade Advisor

    - by Akshay Deep Lamba
    Problem: Like most organizations, we are planning to upgrade our database server from SQL Server 2005 to SQL Server 2008. I would like to know: is there an easy way to know in advance what kind of issues one may encounter when upgrading to a newer version of SQL Server? One way of doing this is to use the Microsoft SQL Server 2008 Upgrade Advisor to plan for upgrades from SQL Server 2000 or SQL Server 2005. In this tip we will take a look at how one can use the SQL Server 2008 Upgrade Advisor to identify potential issues before the upgrade.

    Solution: SQL Server 2008 Upgrade Advisor is a free tool designed by Microsoft to identify potential issues before upgrading your environment to a newer version of SQL Server. Below are the prerequisites which need to be installed before installing the Microsoft SQL Server 2008 Upgrade Advisor:

    - .NET Framework 2.0 or a higher version
    - Windows Installer 4.5 or a higher version
    - Windows Server 2003 SP1 or a higher version, Windows Server 2008, Windows XP SP2 or a higher version, or Windows Vista

    You can download SQL Server 2008 Upgrade Advisor from the following link. Once you have successfully installed Upgrade Advisor, follow the steps below to see how you can use this tool to identify potential issues before upgrading your environment.

    1. Click Start -> Programs -> Microsoft SQL Server 2008 -> SQL Server 2008 Upgrade Advisor.
    2. Click Launch Upgrade Advisor Analysis Wizard to open the wizard.
    3. On the wizard welcome screen, click Next to continue.
    4. In the SQL Server Components screen, enter the Server Name and click the Detect button to identify the components which need to be analyzed, then click Next to continue with the wizard.
    5. In the Connection Parameters screen, choose the Instance Name and Authentication, then click Next to continue with the wizard.
    6. In the SQL Server Parameters screen, select the databases which you want to analyze, trace files if any, and SQL batch files if any. Then click Next to continue with the wizard.
    7. In the Reporting Services Parameters screen, you can specify the Reporting Server instance name, then click Next to continue with the wizard.
    8. In the Analysis Services Parameters screen, you can specify an Analysis Server instance name, then click Next to continue with the wizard.
    9. In the Confirm Upgrade Advisor Settings screen, you will see a quick summary of the options you have selected so far. Click Run to start the analysis.
    10. In the Upgrade Advisor Progress screen, you will see the progress of the analysis. Basically, the Upgrade Advisor runs predefined rules which will help to identify potential issues that can affect your environment once you upgrade your server from a lower version of SQL Server to SQL Server 2008.
    11. Once Upgrade Advisor has completed the analysis of SQL Server, Analysis Services and Reporting Services, click the Launch Report button at the bottom of the wizard screen to see the output.
    12. In the View Report screen, you can see a summary of the issues which can affect you once you upgrade. To learn more about each issue, you can expand it and read its detailed description.

    Read the article

  • Is there a sample set of web log data available for testing analysis against?

    - by Peter
    Sorry if this isn't strictly speaking a programming question, but I figure my best chance of success would be to ask here. I'm developing some web log file analysis algorithms, but to date I only have access to a fairly small amount of web log data to process. One algorithm I want to use makes some assumptions about 'the shape' of typical web log data, and so I'd like to test it against a larger 'exemplar' - perhaps the logs of a busy site with a good distribution of traffic from different sources etc. Is there a set of such data available somewhere? Thanks for any help.

    Read the article

  • Best neural network for a certain type of pattern analysis?

    - by fred basset
    Hi All, I'm working on a system that will send telemetry data on machine operation back to a central server for analysis. One of the machine parameters we're measuring is motor current drawn vs time. After an operation is finished we plan to send back an array of currents vs time to the server. A successful operation would have a pattern like a trapezoid, problematic operations would have a pattern completely different, more like a large spike in values. Can anyone recommend a type of neural network that would be good at classifying these 1D vectors of current values into a pass/fail type output? Thanks, Fred
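
    As a rough baseline before picking a network type, the traces can be resampled to a fixed length and fed to a small feed-forward classifier. A minimal NumPy sketch under those assumptions (all function and variable names are illustrative, and real data would need proper train/test splits and scaling):

```python
import numpy as np

def resample(trace, n=64):
    """Linearly resample a variable-length current-vs-time trace to n points."""
    trace = np.asarray(trace, dtype=float)
    old = np.linspace(0.0, 1.0, len(trace))
    new = np.linspace(0.0, 1.0, n)
    return np.interp(new, old, trace)

def train_mlp(X, y, hidden=16, lr=0.1, epochs=2000, seed=0):
    """One-hidden-layer network with a sigmoid output for pass/fail labels."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.1, (X.shape[1], hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.1, hidden)
    b2 = 0.0
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)                    # hidden layer
        p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))    # predicted P(fail)
        grad_out = (p - y) / len(y)                 # d(cross-entropy)/d(logit)
        W2 -= lr * h.T @ grad_out
        b2 -= lr * grad_out.sum()
        grad_h = np.outer(grad_out, W2) * (1 - h ** 2)
        W1 -= lr * X.T @ grad_h
        b1 -= lr * grad_h.sum(axis=0)
    return W1, b1, W2, b2

# Usage sketch (hypothetical data):
# X = np.stack([resample(t) for t in raw_traces])
# y = np.array(labels, dtype=float)   # 1.0 = spike/fail, 0.0 = trapezoid/pass
# params = train_mlp(X, y)
```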

    Read the article

  • Can I implement the readers and writers algorithm in OpenMP by replacing counting semaphores with another feature?

    - by DeveloperDon
    After reading about OpenMP and not finding functions to support semaphores, I did an internet search for OpenMP and the readers and writers problem, but found no suitable matches. Is there a general method for replacing counting semaphores in OpenMP with something that it supports? Or is there just a gap in the environment where it does not permit things that are asymmetrical like the third readers and writers problem shown on the following page? http://en.wikipedia.org/wiki/Readers-writers_problem#The_third_readers-writers_problem

    Read the article

  • How can I approach creating an efficient algorithm for maximizing value with these specific constraints?

    - by sway
    I'm having trouble coming up with an approach that isn't n^2 for this problem. Here's a contrived, simplified version I've come up with: Let's say you're a company that needs 4 employees to launch in a new city: a manager, two salespeople, and a customer support rep, and you magically know how much impact every candidate will have and how much salary they require to take the job. Your table of potential employees looks something like this:

    Name           Position     Salary   Impact
    Adam Smith     Manager      60,000   11
    Allison Brown  Salesperson  40,000   9
    Brad Stewart   Manager      55,000   9
    ...etc (thousands of records)

    What algorithmic approach can be taken to find the maximum "impact" while still filling all the positions and remaining under, say, a 200,000 budget? Thanks!

    Read the article

  • Algorithm for dynamically calculating a level based on experience points?

    - by George
    One of the struggles I've always had in game development is deciding how to implement experience points attributed to gaining a level. There doesn't seem to be a pattern to gaining a level in many of the games I've played, so I assume they have a static dictionary table which contains experience points vs. the level, e.g.:

    Experience  Level
    0           1
    100         2
    175         3
    280         4
    800         5

    There isn't a rhyme or reason why 280 points is equal to level 4, it just is. I'm not sure how those levels are decided, but it certainly wouldn't be dynamic. I've also thought about the possibility of exponential levels, so as not to have to keep a separate lookup table, e.g.:

    Experience  Level
    0           1
    100         2
    200         3
    400         4
    800         5
    1600        6
    3200        7
    6400        8

    ...but that seems like it would grow out of control rather quickly, as towards the upper levels the enemies in the game would have to provide a whopping amount of experience to level, and that would be too difficult to control. Leveling would become an impossible task. Does anyone have any pointers, or methods they use to decide how to level a character based on experience? I want to be fair in leveling, and I want to stay ahead of the players so as not to worry about constantly adding new experience/level lookups.
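
    One common middle ground between a hand-made lookup table and runaway doubling is a polynomial curve, where the XP needed per level grows steadily but not exponentially. A small sketch in Python (the base and exponent are just example tuning knobs, not canonical values):

```python
BASE, EXPONENT = 100, 1.5   # example tuning knobs

def xp_for_level(level: int) -> int:
    """Total experience required to reach a given level (level 1 needs 0)."""
    return int(BASE * (level - 1) ** EXPONENT)

def level_for_xp(xp: int) -> int:
    """Invert the curve: the highest level whose threshold is <= xp."""
    level = 1
    while xp_for_level(level + 1) <= xp:
        level += 1
    return level

for lvl in range(1, 9):
    print(lvl, xp_for_level(lvl))   # 0, 100, 282, 519, 800, 1118, 1469, 1852
```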

    Read the article

  • What algorithm(s) can be used to achieve reasonably good next word prediction?

    - by yati sagade
    What is a good way of implementing "next-word prediction"? For example, the user types "I am" and the system suggests "a" and "not" (or possibly others) as the next word. I am aware of a method that uses Markov Chains and some training text (obviously) to more or less achieve this. But I read somewhere that this method is very restrictive and applies to very simple cases. I understand the basics of neural networks and genetic algorithms (though I have never used them in a serious project) and maybe they could be of some help. I wonder if there are any algorithms that, given appropriate training text (e.g., newspaper articles, and the user's own typing), can come up with reasonably appropriate suggestions for the next word. If not (links to) algorithms, then general high-level methods of attacking this problem are welcome.
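
    A minimal sketch of the Markov-chain approach mentioned above, in Python: count bigrams in the training text and suggest the most frequent followers of the last typed word. A real predictor would add smoothing, longer contexts, and the user's own typing history on top of this:

```python
from collections import Counter, defaultdict

def train_bigrams(text: str):
    """Map each word to a Counter of the words that follow it in the training text."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        model[current][nxt] += 1
    return model

def suggest(model, prompt: str, k: int = 3):
    """Return the k most frequent words seen after the last word of the prompt."""
    last = prompt.lower().split()[-1]
    return [word for word, _ in model[last].most_common(k)]

model = train_bigrams("i am a developer and i am not a designer but i am happy")
print(suggest(model, "I am"))   # e.g. ['a', 'not', 'happy']
```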

    Read the article

  • If I use locks, can my algorithm still be lock-free?

    - by Joe Pension
    A common definition of lock-free is that at least one process makes progress.[1] If I have a simple data structure such as a queue, protected by a lock, then one process can always make progress, as one process can acquire the lock, do what it wants, and release it. So does it meet the definition of lock-free?

    [1] See e.g. M. Herlihy, V. Luchangco, and M. Moir. Obstruction-free synchronization: Double-ended queues as an example. In Distributed Computing, 2003. "It is lock-free if it ensures only that some thread always makes progress."

    Read the article

  • Algorithm for determining a grid based on variably sized "blocks"?

    - by Lite Byte
    I'm trying to convert a set of "blocks" into a grid-like layout. The blocks have a width of either 25%, 33%, 50%, 66%, or 75% of their container, and each row of the grid should try to fit as many blocks as possible, up to a total width of 100%. I've discovered that trying to do this while leaving no remaining blocks in the original set is very hard. Eventually, I think my solution will be to upgrade/downgrade various block sizes (based on their priority or something) so they all fit into a row. In either case, before I do that, I thought I'd check whether someone has some code (or a paper) demonstrating a solution to this problem already. And bonus points if the solution incorporates varying block heights into its calculations :) Thanks!
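
    A first-fit greedy pass is a reasonable starting point before worrying about upgrading/downgrading block sizes. A minimal Python sketch, with the assumption that 33% and 66% are really 1/3 and 2/3, so that 33 + 66 counts as a full row:

```python
from fractions import Fraction

# Treat the percentages as exact fractions so 33% + 66% fills a row cleanly.
WIDTHS = {25: Fraction(1, 4), 33: Fraction(1, 3), 50: Fraction(1, 2),
          66: Fraction(2, 3), 75: Fraction(3, 4)}

def pack_rows(blocks):
    """Greedy first-fit: place each block in the first row it fits in."""
    rows = []          # each row is a dict: {"blocks": [...], "used": Fraction}
    for pct in blocks:
        width = WIDTHS[pct]
        for row in rows:
            if row["used"] + width <= 1:
                row["blocks"].append(pct)
                row["used"] += width
                break
        else:
            rows.append({"blocks": [pct], "used": width})
    return [row["blocks"] for row in rows]

print(pack_rows([75, 25, 33, 66, 50, 50, 25]))
# [[75, 25], [33, 66], [50, 50], [25]]
```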

    Read the article

  • What is a good algorithm to distribute items with specific requirements?

    - by user66160
    I have to programmatically distribute a set of items to some entities, but there are rules both on the items and on the entities, like so:

    - Item one: 100 units, only entities from Foo
    - Item two: 200 units, no restrictions
    - Item three: 100 units, only entities that have Bar
    - Entity one: only items that have Baz
    - Entity one hundred: no items that have Fubar

    I only need to be pointed in the right direction; I'll research and learn the suggested methods.

    Read the article

  • How to optimize Dijkstra algorithm for a single shortest path between 2 nodes?

    - by Nazgulled
    Hi, I was trying to understand this implementation in C of the Dijkstra algorithm, and at the same time modify it so that only the shortest path between 2 specific nodes (source and destination) is found. However, I don't know exactly what to do. The way I see it, there's nothing much to do: I can't seem to change d[] or prev[], because those arrays aggregate some important data for the shortest path calculation. The only thing I can think of is stopping the algorithm when the path is found, that is, breaking the loop when mini = destination as it's being marked as visited. Is there anything else I could do to make it better, or is that enough? P.S.: I just noticed that the for loops start at 1 and go until <=; why can't they start at 0 and go until <?
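
    Stopping as soon as the destination is settled is indeed the main single-pair optimization. Below is a minimal Python sketch with a binary heap that does exactly that (the graph is an adjacency list of (neighbor, weight) pairs; the names are illustrative and not taken from the C code in the question):

```python
import heapq

def dijkstra_single_pair(graph, source, dest):
    """Shortest distance and path from source to dest; stops once dest is settled."""
    dist = {source: 0}
    prev = {}
    visited = set()
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        if u == dest:                         # destination settled: d is final
            path = [dest]
            while path[-1] != source:
                path.append(prev[path[-1]])
            return d, path[::-1]
        visited.add(u)
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    return float("inf"), []

graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 2), ("D", 6)], "C": [("D", 3)]}
print(dijkstra_single_pair(graph, "A", "D"))  # (6, ['A', 'B', 'C', 'D'])
```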

    Read the article

  • What is the algorithm used by the memberwise equality test in .NET structs?

    - by Damian Powell
    What is the algorithm used by the memberwise equality test in .NET structs? I would like to know this so that I can use it as the basis for my own algorithm. I am trying to write a recursive memberwise equality test for arbitrary objects (in C#) for testing the logical equality of DTOs. This is considerably easier if the DTOs are structs (since ValueType.Equals does mostly the right thing) but that is not always appropriate. I would also like to override comparison of any IEnumerable objects (but not strings!) so that their contents are compared rather than their properties. This has proven to be harder than I would expect. Any hints will be greatly appreciated. I'll accept the answer that proves most useful or supplies a link to the most useful information. Thanks.

    Read the article

  • Algorithm to generate all possible permutations of a list?

    - by LLer
    Say I have a list of n elements; I know there are n! possible ways to order these elements. What is an algorithm to generate all possible orderings of this list? For example, I have the list [a, b, c]. The algorithm would return [[a, b, c], [a, c, b], [b, a, c], [b, c, a], [c, a, b], [c, b, a]]. I'm reading this here: http://en.wikipedia.org/wiki/Permutation#Algorithms_to_generate_permutations But Wikipedia has never been good at explaining. I don't understand much of it.
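
    The standard recursive idea is: fix each element in front in turn and permute the rest. A short Python sketch (in practice itertools.permutations does the same job):

```python
def permutations(items):
    """Return all orderings of items by fixing each element and permuting the rest."""
    if len(items) <= 1:
        return [list(items)]
    result = []
    for i, head in enumerate(items):
        rest = items[:i] + items[i + 1:]
        for tail in permutations(rest):
            result.append([head] + tail)
    return result

print(permutations(["a", "b", "c"]))
# [['a', 'b', 'c'], ['a', 'c', 'b'], ['b', 'a', 'c'],
#  ['b', 'c', 'a'], ['c', 'a', 'b'], ['c', 'b', 'a']]
```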

    Read the article

  • Bandpass filter of an FFT-applied image (like the ImageJ bandpass filter algorithm)

    - by maximus
    There is a good function that I need, which is implemented in the Java program ImageJ. I need to understand the algorithm used there. The function has several parameters: link text And before using the FFT it converts the image to a special one: "The Bandpass Filter uses a special algorithm to reduce edge artifacts (before the Fourier transform, the image is extended in size by attaching mirrored copies of image parts outside the original image, thus no jumps occur at the edges)." Can you tell me more about this special transform, i.e. actually tiling the mirrored image? I am writing in C++ and wish to rewrite that part of the program in C++.
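
    The quoted behaviour is symmetric (mirror) padding: the image is surrounded by reflected copies of itself so the periodic tiling implicitly assumed by the FFT has no discontinuities at the original borders. A Python/NumPy sketch of the idea, not of ImageJ's exact code (in C++ the same effect can be had by copying rows and columns in reverse order, or with OpenCV's copyMakeBorder using a reflected border type):

```python
import numpy as np

def mirror_extend(image: np.ndarray, pad: int) -> np.ndarray:
    """Surround the image with mirrored copies of its border regions so that
    the FFT's implicit periodic tiling has no jumps at the original edges."""
    return np.pad(image, pad, mode="symmetric")

img = np.arange(16, dtype=float).reshape(4, 4)
big = mirror_extend(img, 2)          # 8x8: original 4x4 in the center
spectrum = np.fft.fft2(big)          # apply the bandpass in the frequency domain here...
filtered = np.real(np.fft.ifft2(spectrum))
cropped = filtered[2:-2, 2:-2]       # ...then crop back to the original size
```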

    Read the article
