Search Results

Search found 5104 results on 205 pages for 'evolutionary algorithm'.

Page 168 of 205

  • Architecture of a secure application that encrypts data in the database.

    - by Przemyslaw Rózycki
    I need to design an application that protects some data in a database against a root attack. That means that even if the attacker takes control of the machine where the data is stored, or of the machine running the application server, he cannot read certain business-critical data from the database. This is a customer requirement. I'm going to encrypt the data with an asymmetric algorithm, and I need some good ideas on where to store the private keys, so that the data stays secure while the application remains comfortable to use. We can assume, for simplicity, that only one key pair is used.
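
    A minimal sketch of the asymmetric split described above, in Python with the cryptography package (the library choice is an assumption; the question itself is language-agnostic). The hosts the attacker can root hold only the public key, so they can write new rows but never read them back:

        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric import rsa, padding

        # Key pair generated once, offline; the private key never touches the
        # application or database hosts.
        private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        public_key = private_key.public_key()

        oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                            algorithm=hashes.SHA256(), label=None)

        ciphertext = public_key.encrypt(b"business-critical data", oaep)  # app side
        plaintext = private_key.decrypt(ciphertext, oaep)  # trusted side only

    In practice the private key would live in an HSM, on a smart card, or in a separate decryption service outside the attacker's reach, and bulk data would be encrypted under a random symmetric key that is itself wrapped with the RSA key (hybrid encryption).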

    Read the article

  • Need to get pixel averages of a vector sitting on a bitmap...

    - by user346511
    I'm currently involved in a hardware project where I am mapping triangular-shaped LEDs to traditional bitmap images. I'd like to overlay a triangle vector onto an image and get the average pixel data within the bounds of that vector. However, I'm unfamiliar with the math needed to calculate this. Does anyone have an algorithm, or a link that could send me in the right direction? I'm not even clear what this type of math is called. I've created a basic image of what I'm trying to capture here: http://imgur.com/Isjip.gif
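
    For what it's worth, the math involved is usually called rasterization (scan conversion), and the inside test can be done with half-plane sign checks or barycentric coordinates. A minimal sketch, assuming Pillow for pixel access: walk the triangle's bounding box and average every pixel that falls inside.

        from PIL import Image  # assumption: Pillow is available for pixel access

        def point_in_triangle(p, a, b, c):
            # Sign of the 2D cross product tells which side of an edge p is on;
            # p is inside iff it is on the same side of all three edges.
            def side(p1, p2, p3):
                return (p1[0] - p3[0]) * (p2[1] - p3[1]) - (p2[0] - p3[0]) * (p1[1] - p3[1])
            d1, d2, d3 = side(p, a, b), side(p, b, c), side(p, c, a)
            return not ((d1 < 0 or d2 < 0 or d3 < 0) and (d1 > 0 or d2 > 0 or d3 > 0))

        def triangle_average(img, a, b, c):
            px = img.load()
            xs, ys = (a[0], b[0], c[0]), (a[1], b[1], c[1])
            total, count = [0, 0, 0], 0
            for y in range(min(ys), max(ys) + 1):
                for x in range(min(xs), max(xs) + 1):
                    if point_in_triangle((x, y), a, b, c):
                        r, g, b2 = px[x, y][:3]
                        total = [total[0] + r, total[1] + g, total[2] + b2]
                        count += 1
            return tuple(t / count for t in total) if count else None

        img = Image.open("leds.png").convert("RGB")  # hypothetical input file
        print(triangle_average(img, (10, 10), (60, 12), (35, 55)))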

    Read the article

  • PHP: Safe way to store decryptable passwords

    - by Jammer
    I'm making an application in PHP, and there is a requirement that it must be possible to decrypt the passwords, in order to avoid future problems with switching the user database to a different system. What encryption/decryption algorithm would you suggest? Is it a good idea to just store the encrypted value and then compare future authentication attempts to that value? Are the passwords still as safe as with MD5/SHA1 when the private key is not available to the attacker (hidden on a USB drive, for example)? I should still use salting, right? What encryption libraries should I use for PHP?
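
    The question asks about PHP, but the shape of the answer is the same in any language; a minimal Python sketch with the cryptography package's Fernet (authenticated symmetric encryption) shows the store-encrypted, decrypt-on-demand flow. Note that salting belongs to hashing; a scheme like this already randomizes every ciphertext with a fresh IV.

        from cryptography.fernet import Fernet

        key = Fernet.generate_key()  # keep this off the server, e.g. on the USB drive
        f = Fernet(key)

        token = f.encrypt(b"user password")  # random IV per call; store the token
        assert f.decrypt(token) == b"user password"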

    Read the article

  • UML and Documenting Simple Diagrams

    - by Jason
    As part of a rewrite of an old Java application into C#, I'm writing an actual Software Design Specification. A problem I run into is when a method is too simple to bother with a sequence diagram (it doesn't interact with other objects). As an example, I have a simple POJO called Item, containing the following method: public String getCategoryKey() { StringBuffer value = new StringBuffer("s-"); value.append(this.getModelID()); value.append("-c"); return value.toString(); } The purpose and the algorithm of the method need to be documented, but a sequence diagram is overkill. How would others document it? (I take no credit/blame for the given method; it's very old code, and the author "forgot" to put their name in the Javadoc.)

    Read the article

  • What's the deal with Python?

    - by gmatt
    My interests in programming lie mainly in algorithms, and lately I have seen many reputable researchers write a lot of their code in Python. How easy and convenient is Python for scientific computing? Does it have a library of algorithms that compares to MATLAB's? Is Python a scripting language, or does it compile? Is it a great language for prototyping an algorithm? How long would it take me to learn enough of it to be productive, given that I know C well and OO programming somewhat? Is it OO-based? Sorry for the condensed format of the questions, but I'm very curious and was hoping a more experienced programmer could help me out.

    Read the article

  • k-means clustering in R on very large, sparse matrix?

    - by movingabout
    Hello, I am trying to do some k-means clustering on a very large matrix. The matrix is approximately 500000 rows x 4000 columns, yet very sparse (only a couple of "1" values per row). The whole thing does not fit into memory, so I converted it into a sparse ARFF file. But R apparently can't read the sparse ARFF file format. I also have the data as a plain CSV file. Is there any package available in R for loading such sparse matrices efficiently? I'd then use the regular k-means algorithm from the cluster package to proceed. Many thanks

    Read the article

  • Extracting data points from a matrix and saving them in different matrixes in MATLAB

    - by Hossein
    Hi, I have a 2D matrix consisting of some coordinates, for example: Data(X,Y): 45.987543423, 5.35000964 / 52.987544223, 5.98765234. I also have an array consisting of integers >= 0, for example: Cluster(M): 2, 0, 3, 1. Each number in this array corresponds to a row of my 2D matrix above. For example, it says that row one (a coordinate) in the Data matrix belongs to cluster 2, the second row belongs to cluster 0, and so on. Now I want the data points of each cluster in a separate matrix; for example, I want to save the data points belonging to cluster 1 in one matrix, those belonging to cluster 2 in another, and so on. I could do this manually, but the problem is that the extraction has to be automatic: the number of clusters (the range of the numbers in the cluster array) varies in each run, so I need a general algorithm that does this extraction for me. Can someone help me please? Thanks
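
    In MATLAB itself, logical indexing such as Data(Cluster == k, :) already does the per-cluster extraction, and looping over the labels that actually occur handles the varying cluster count. The same idea sketched in NumPy (the last two data rows are filler so that every label has a row):

        import numpy as np

        Data = np.array([[45.987543423, 5.35000964],
                         [52.987544223, 5.98765234],
                         [12.3, 4.5],
                         [6.7, 8.9]])
        Cluster = np.array([2, 0, 3, 1])  # label of the row with the same index

        # One sub-matrix per label; np.unique adapts to however many clusters occur.
        by_cluster = {k: Data[Cluster == k] for k in np.unique(Cluster)}
        print(by_cluster[2])  # rows of Data assigned to cluster 2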

    Read the article

  • Enumerating all hamiltonian paths from start to end vertex in grid graph

    - by Eric
    Hello, I'm trying to count the number of Hamiltonian paths in a grid graph that start at one specified vertex and end at another. Right now I have a solution that uses backtracking recursion but is incredibly slow in practice (O(n!); about 3 hours for a 7x7 grid). I've tried a couple of speedup techniques, such as maintaining a list of reachable nodes, making sure the end node is still reachable, and checking for isolated nodes, but all of these slowed my solution down. I know that the problem is NP-complete, but it seems like some reasonable speedups should be achievable given the grid structure. Since I'm trying to count all the paths, I'm sure the search must be exhaustive, but I'm having trouble figuring out how to prune out paths that aren't promising. Does anyone have suggestions for speeding the search up? Or an alternate search algorithm?
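
    For reference, a minimal Python version of the backtracking counter, with the one prune that is always safe (never step on the end vertex early). This is the slow baseline the question already has, not a fast method; the approaches that tend to win on grids are incremental connectivity checks and, for larger boards, dynamic programming over column profiles instead of DFS.

        def count_hamiltonian_paths(m, n, start, end):
            """Count Hamiltonian paths from start to end in an m x n grid graph."""
            total = m * n
            visited = {start}

            def neighbors(r, c):
                for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                    if 0 <= nr < m and 0 <= nc < n:
                        yield nr, nc

            def dfs(cell, depth):
                if depth == total:
                    return 1 if cell == end else 0
                count = 0
                for nxt in neighbors(*cell):
                    if nxt in visited:
                        continue
                    if nxt == end and depth + 1 < total:
                        continue  # touching the end vertex early can never finish
                    visited.add(nxt)
                    count += dfs(nxt, depth + 1)
                    visited.discard(nxt)
                return count

            return dfs(start, 1)

        print(count_hamiltonian_paths(3, 3, (0, 0), (2, 2)))  # small sanity check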

    Read the article

  • Rearrange a python list into n lists, by column

    - by Ben R
    Trying to solve this at this hour has gotten my mind into a tail-spin: I want to rearrange a list l into a list of n lists, where n is the number of columns. E.g., l = [1,2,3,4,5,6,7,8], n = 5 ==> [[1,6],[2,7],[3,8],[4],[5]]. Another example: l = [1,2,3,4,5,6,7,8,9,10], n = 4 ==> [[1,5,9],[2,6,10],[3,7],[4,8]]. Can someone please help me out with an algorithm? Feel free to use any Python awesomeness that's available; I'm sure there's some cool mechanism that's a good fit for this, I just can't think of it.
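
    The example outputs are exactly what Python's extended slicing produces: element i of l belongs to sublist i % n, which is the slice l[i::n].

        def by_column(l, n):
            # Sublist i takes every n-th element of l, starting at offset i.
            return [l[i::n] for i in range(n)]

        print(by_column([1, 2, 3, 4, 5, 6, 7, 8], 5))
        # [[1, 6], [2, 7], [3, 8], [4], [5]]
        print(by_column([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 4))
        # [[1, 5, 9], [2, 6, 10], [3, 7], [4, 8]]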

    Read the article

  • iteration on numbers with no 2 same digits

    - by rahmivolkan
    I don't know if this has been asked before (I couldn't find anything). I want to iterate over this kind of numbers, implemented as an array: int a[10]; int i = 0; for( ; i < 10; i++ ) a[i] = i+1; Now the array holds "1 2 3 4 5 6 7 8 9 10", and I want to get "1 2 3 4 5 6 7 8 10 9", then "1 2 3 4 5 6 7 9 8 10", then "1 2 3 4 5 6 7 9 10 8", and so on. I tried to come up with an algorithm but couldn't figure it out. Is there an easy way to implement a "next" iterator for this kind of problem? Thanks in advance
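
    The sequence being described is lexicographic permutation order; in C++ this is exactly std::next_permutation from <algorithm>, and the classic algorithm behind it is short enough to write by hand. A Python sketch:

        def next_permutation(a):
            """Advance list a to its next lexicographic permutation, in place.
            Returns False when a is already the last (descending) permutation."""
            # 1. Find the rightmost i with a[i] < a[i+1].
            i = len(a) - 2
            while i >= 0 and a[i] >= a[i + 1]:
                i -= 1
            if i < 0:
                return False
            # 2. Find the rightmost j > i with a[j] > a[i], and swap.
            j = len(a) - 1
            while a[j] <= a[i]:
                j -= 1
            a[i], a[j] = a[j], a[i]
            # 3. Reverse the suffix after position i.
            a[i + 1:] = a[:i:-1]
            return True

        a = list(range(1, 11))
        next_permutation(a); print(a)  # [1, 2, 3, 4, 5, 6, 7, 8, 10, 9]
        next_permutation(a); print(a)  # [1, 2, 3, 4, 5, 6, 7, 9, 8, 10]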

    Read the article

  • Selecting item from set given distribution

    - by JH
    I have a set of X items, such as {blower, mower, stove}, and each item has a certain percentage of times it should be selected from the overall set ({blower=25%, mower=25%, stove=75%}), along with a certain distribution these items should follow (blower should be selected more at the beginning of selection, and stove more at the end). We are given a number of objects to be selected overall (e.g. 100) and an overall time to do this in (say 100 seconds). I was thinking of using a roulette wheel algorithm where the weights on the wheel are affected by the desired distribution as a function of the elapsed time (and the allowed duration), so that simple functions could be used to determine the weights. Are there any common approaches to problems like this that anyone is aware of? Currently I have programmed something similar to this in Java, using functions such as x^2 (with correct normalization of the weights) to ensure that a good distribution occurs. Other suggestions or common practices would be welcome :-)
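
    A minimal sketch of that time-weighted roulette wheel in Python; the ramp functions here are invented placeholders for whatever distributions the items actually need, not a recommendation:

        import random

        # Hypothetical weight curves over normalized time t in [0, 1]:
        # blower favoured early, stove favoured late, mower flat.
        weights = {
            "blower": lambda t: (1.0 - t) ** 2,
            "mower":  lambda t: 0.5,
            "stove":  lambda t: t ** 2,
        }

        def pick(t):
            """Roulette wheel: select an item in proportion to its weight at time t."""
            current = {item: f(t) for item, f in weights.items()}
            r = random.uniform(0, sum(current.values()))
            for item, w in current.items():
                r -= w
                if r <= 0:
                    return item
            return item  # guard against floating-point leftovers

        # 100 selections spread across a 100-second run.
        selections = [pick(i / 100.0) for i in range(100)]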

    Read the article

  • Search pattern in string using regex in obj-c

    - by manileo86
    I'm working on a string pattern-matching algorithm. I use NSRegularExpression for finding the matches. For example, I have to find all words starting with '#' in a string. Currently I use the following regex function: static NSRegularExpression *_searchTagRegularExpression; static inline NSRegularExpression * SearchTagRegularExpression() { if (!_searchTagRegularExpression) { _searchTagRegularExpression = [[NSRegularExpression alloc] initWithPattern:@"(?<!\\w)#([\\w\\._-]+)?" options:NSRegularExpressionCaseInsensitive error:nil]; } return _searchTagRegularExpression; } and I use it as below: NSRegularExpression *regexp = SearchTagRegularExpression(); [regexp enumerateMatchesInString:searchString options:0 range:stringRange usingBlock:^(NSTextCheckingResult *result, NSMatchingFlags flags, BOOL *stop) { // comes here for every match with range }]; This works properly, but I just want to know if this is the best way. Please suggest if there's a better alternative...

    Read the article

  • R: how to make a unique set of names from a vector of strings?

    - by Mike Dewar
    Hi, I have a vector of strings. Check out my vector, it's awesome: > awesome [1] "a" "b" "c" "d" "d" "e" "f" "f" I'd like to make a new vector that is the same length as awesome but where, if necessary, the strings have been uniqueified. For example, a valid output of my desired function would be > awesome.uniqueified [1] "a" "b" "c" "d.1" "d.2" "e" "f.1" "f.2" Is there an easy, R-thonic and beautiful way to do this? I should say my list in real life (it's not called awesome) contains 25000-ish microarray probeset identifiers. I'm always nervous when I embark on writing little generic functions (which I'm sure I could do), as I'm sure some R guru has come across this problem in the past and nailed it with some incredible algorithm that doesn't even have to store more than half an element of the vector. I'm just not sure what they might have called it. Probably not uniqueify.
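
    Base R's make.unique(awesome) comes close, but it leaves the first member of each duplicated group unsuffixed ("d", "d.1" rather than "d.1", "d.2"). The exact output asked for is a short two-pass loop; a Python sketch of the logic, easy to transliterate into R:

        from collections import Counter

        def uniqueify(names):
            total = Counter(names)  # how many times each name occurs overall
            seen = Counter()
            out = []
            for name in names:
                if total[name] == 1:
                    out.append(name)  # singletons pass through untouched
                else:
                    seen[name] += 1
                    out.append("%s.%d" % (name, seen[name]))
            return out

        print(uniqueify(["a", "b", "c", "d", "d", "e", "f", "f"]))
        # ['a', 'b', 'c', 'd.1', 'd.2', 'e', 'f.1', 'f.2']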

    Read the article

  • The easiest way to draw an image?

    - by Benno
    Assume you want to read an image file in a common file format from the hard drive, change the color of one pixel, and display the resulting image on the screen, in C++. Which (open-source) libraries would you recommend to accomplish the above with the least amount of code? Alternatively, which libraries would do the above in the most elegant way possible? A bit of background: I have been reading a lot of computer graphics literature recently, and there are lots of relatively easy, pixel-based algorithms I'd like to implement. However, while the algorithms themselves would usually be straightforward to implement, the amount of framework needed to manipulate an image on a per-pixel basis and display the result has stopped me from doing it.

    Read the article

  • Serial numbers generation without user data

    - by Sphynx
    This is a followup to this question. The accepted answer is generally sufficient, but requires the user to supply personal information (e.g. a name) for generating the key. I'm wondering if it's possible to generate different keys based on a common seed, in such a way that the program would be able to validate that those keys belong to a particular product, but without making this process obvious to the end user. I mean, it could be a hash of the product ID plus some random sequence of characters, but that would allow the user to guess potential new keys. There should be some sort of algorithm that is difficult to guess.
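
    One standard shape for this, sketched in Python: each key is a random payload plus a truncated HMAC over it under a product-specific secret. The program verifies keys offline, and guessing new valid keys means forging the MAC; the real weak point of any such offline scheme is that the secret ships inside the binary. All names and sizes here are illustrative:

        import hashlib
        import hmac
        import secrets

        SECRET = b"per-product signing secret"  # embedded/obfuscated in the program
        PRODUCT_ID = b"WIDGET-PRO-2"            # hypothetical product identifier

        def make_key():
            payload = secrets.token_hex(6).upper()  # random, no user data involved
            mac = hmac.new(SECRET, PRODUCT_ID + payload.encode(),
                           hashlib.sha256).hexdigest()[:8].upper()
            return payload + "-" + mac

        def is_valid(key):
            payload, _, mac = key.rpartition("-")
            expected = hmac.new(SECRET, PRODUCT_ID + payload.encode(),
                                hashlib.sha256).hexdigest()[:8].upper()
            return hmac.compare_digest(mac, expected)

        k = make_key()
        print(k, is_valid(k))  # e.g. 3F9C0A7B21D4-8E1F0C2A True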

    Read the article

  • PHP 2-way encryption: I need to store passwords that can be retrieved

    - by gAMBOOKa
    I am creating an application that will store passwords which the user can retrieve and see. The passwords are for a hardware device, so checking against hashes is out of the question. What I need to know is: How do I encrypt and decrypt a password in PHP? What is the safest algorithm to encrypt the passwords with? Where do I store the private key? Instead of storing the private key, is it a good idea to require users to enter the private key any time they need a password decrypted? (The users of this application can be trusted.) In what ways can the password be stolen and decrypted? What do I need to be aware of?

    Read the article

  • 0.20.2 API hadoop version with java 5

    - by abdeslam
    I have started a Maven project trying to implement the MapReduce algorithm in Java 1.5.0_14. I have chosen the 0.20.2 API version of Hadoop. In the pom.xml I'm thus using the following dependency: <dependency> <groupId>org.apache.hadoop</groupId> <artifactId>hadoop-core</artifactId> <version>0.20.2</version> </dependency> But when I use an import of the org.apache.hadoop classes, I get the following error: bad class file: ${HOME_DIR}\repository\org\apache\hadoop\hadoop-core\0.20.2\hadoop-core-0.20.2.jar(org/apache/hadoop/fs/Path.class) class file has wrong version 50.0, should be 49.0. Does someone know how I can solve this issue? Thanks.

    Read the article

  • Run java with highest security setting

    - by Ankiov Spetsnaz
    I'm currently writing an in-house coding-challenge web application, and I am wondering if there is any other security precaution I would need besides the Java option below at runtime. java -Djava.security.manager=default Basically, the challenges will be single-threaded, math- and algorithm-focused, so I would need to enable basic data-structure objects and disable any file, socket, or threading access, or anything else that is not strictly necessary. Based on my quick search, turning on the security manager as above seems to be a solution, but since this is security-related I would like to be sure before it goes live. Is there anything else I should do?

    Read the article

  • Alternative to latex / a way to typeset good looking documents from Java to PDF

    - by drasto
    I'm working on an application in Java that will maintain a database of song lyrics in plain text and print out songbooks/chordbooks (that is, create a PDF file from selected songs). I was planning for the Java application to generate source code for pdflatex; after compiling this source, the user would get a PDF file. Lately I've run into a lot of problems because of LaTeX's limitations: a fixed memory size (some pictures will also be drawn into the PDF), with an error when it is exceeded; no way to query the end of a line or of a page dynamically; and it's very hard to override LaTeX's placement algorithm in a complex way (see also some of my other questions regarding LaTeX). I have come to the conclusion that LaTeX is not a good option for automated PDF generation, so I need a replacement. I need to be able to typeset: chords over lyrics, where the lyrics are in a variable-width font, so I need to be able to measure text width; chord diagrams, which means I'll have to draw quite complex pictures; each song on a separate double page; different fonts; etc. Thanks for all answers

    Read the article

  • Java long task - Did it stop writing to file?

    - by rockit
    I am writing a lot of data to a file, and while keeping my eye on the file it eventually stopped growing in size. Essentially my task is getting information from a database and printing out all non-unique values in column A. Since there are many rows in the database table, and the table is across my network, this is taking days to complete. So I'm concerned that, since the file isn't growing, nothing is actually being written to it anymore. Which is odd: I have no "catch" blocks in my code, so if there were a problem writing to the file, wouldn't it have thrown an error? Should I let the task complete (estimated 2-3 days from today), or is there something else going on here that I don't know about, making my application not write to the file? My algorithm goes something like this: declare the file; create a new file; open the file for writing; get a database connection; get a result set from the database; for each row in the result set, write column "A" to the file, and if row# % 100000 == 0, write "completed" + row# + " rows" to the screen; when no more rows exist, close the file and write "completed" to the screen.

    Read the article

  • How to test Text Search accuracy and efficiency?

    - by DEN
    I have created a web application. One of its features is text search, which also supports the boolean operators (NOT, AND, OR). However, I have no idea how to calculate the search's accuracy and efficiency. For example, given the documents: 1. Probe identification system for a measurement instrument; 2. Pulse-based impedance measurement instrument; 3. Millimeter with filtered measurement mode. When the user keys in a query, the results returned are as below: input: measurement instrument -> result: 1, 2; input: measurement OR instrument NOT milimeter -> result: 1, 2, 3. So I have no idea which measures and algorithms to use to evaluate the accuracy and efficiency of the text search. Does anyone have any ideas?
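
    The usual accuracy measures for search are precision (what fraction of the returned documents are relevant) and recall (what fraction of the relevant documents were returned), computed against a hand-labelled set of query/expected-result pairs; efficiency is then just the timing of that same query set. A minimal sketch:

        def precision_recall(returned, relevant):
            returned, relevant = set(returned), set(relevant)
            hits = returned & relevant
            precision = len(hits) / len(returned) if returned else 0.0
            recall = len(hits) / len(relevant) if relevant else 0.0
            return precision, recall

        # Suppose a judge decides only documents 1 and 2 are relevant to the
        # second example query; returning 1, 2, 3 then scores:
        print(precision_recall(returned=[1, 2, 3], relevant=[1, 2]))
        # (0.6666..., 1.0)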

    Read the article

  • Rounding a positive number to a power of another number

    - by Sagekilla
    I'm trying to round a number to the next smallest power of another number. The number I'm trying to round is always positive. I'm not particular about which direction it rounds, but I prefer downwards if possible. I would like to be able to round towards arbitrary bases, but the ones I'm most concerned with at the moment are base 2 and fractional powers of 2, like 2^(1/2), 2^(1/4), and so forth. Here's my current algorithm for base 2 (the log2 I multiply by is actually the inverse of log 2): double roundBaseTwo(double x) { return 1.0 / (1 << (int)(log(x) * log2)); } Any help would be appreciated!
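
    For an arbitrary base b (a fractional power of 2 is just another base, e.g. b = 2 ** 0.5), rounding a positive x down to a power of b is b ** floor(log(x) / log(b)). A Python sketch:

        import math

        def round_down_to_power(x, base):
            """Largest base**k (integer k) that is <= x; x > 0, base > 0, base != 1.
            Beware floating-point error when x is already an exact power of base."""
            return base ** math.floor(math.log(x) / math.log(base))

        print(round_down_to_power(100.0, 2))         # 64
        print(round_down_to_power(100.0, 2 ** 0.5))  # 90.509... == 2 ** 6.5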

    Read the article

  • how to diff / align Python lists using arbitrary matching function?

    - by James Tauber
    I'd like to align two lists in a similar way to what difflib.Differ would do, except I want to be able to define a match function for comparing items, not just use string equality, and preferably a match function that can return a number between 0.0 and 1.0, not just a boolean. So, for example, say I had the two lists: L1 = [('A', 1), ('B', 3), ('C', 7)] L2 = ['A', 'b', 'C'] and I want to be able to write a match function like this: def match(item1, item2): if item1[0] == item2: return 1.0 elif item1[0].lower() == item2.lower(): return 0.5 else: return 0 and then do: d = Differ(match_func=match) d.compare(L1, L2) and have it diff using the match function. Like difflib, I'd rather the algorithm gave more intuitive Ratcliff-Obershelp-type results than a purely minimal Levenshtein distance.
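
    difflib won't accept a scoring function, but the underlying problem is classic global sequence alignment; below is a minimal Needleman-Wunsch-style sketch that maximises the summed match scores (gaps score 0 here for simplicity; this is a swapped-in technique, not difflib's Ratcliff-Obershelp behaviour).

        def align(l1, l2, match):
            """Global alignment maximising total match score; gaps score 0.
            Returns a list of (item1_or_None, item2_or_None) pairs."""
            n, m = len(l1), len(l2)
            # score[i][j] = best score aligning l1[:i] with l2[:j]
            score = [[0.0] * (m + 1) for _ in range(n + 1)]
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    score[i][j] = max(score[i - 1][j],  # gap in l2
                                      score[i][j - 1],  # gap in l1
                                      score[i - 1][j - 1] + match(l1[i - 1], l2[j - 1]))
            # Trace back from the bottom-right corner to recover the pairing.
            pairs, i, j = [], n, m
            while i > 0 or j > 0:
                if i and j and score[i][j] == score[i - 1][j - 1] + match(l1[i - 1], l2[j - 1]):
                    pairs.append((l1[i - 1], l2[j - 1])); i -= 1; j -= 1
                elif i and score[i][j] == score[i - 1][j]:
                    pairs.append((l1[i - 1], None)); i -= 1
                else:
                    pairs.append((None, l2[j - 1])); j -= 1
            return pairs[::-1]

        def match(item1, item2):
            if item1[0] == item2:
                return 1.0
            if item1[0].lower() == item2.lower():
                return 0.5
            return 0.0

        print(align([('A', 1), ('B', 3), ('C', 7)], ['A', 'b', 'C'], match))
        # [(('A', 1), 'A'), (('B', 3), 'b'), (('C', 7), 'C')]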

    Read the article

  • Generating a beveled edge for a 2D polygon

    - by Metaphile
    I'm trying to programmatically generate beveled edges for geometric polygons. For example, given an array of 4 vertices defining a square, I want to generate something like this. But computing the vertices of the inner shape is baffling me. Simply creating a copy of the original shape and then scaling it down will not produce the desired result most of the time. My algorithm so far involves analyzing adjacent edges (triples of vertices; e.g., the bottom-left, top-left, and top-right vertices of a square). From there, I need to find the angle between them, and then create a vertex somewhere along that angle, depending on how deep I want the bevel to be. And because I don't have much of a math background, that's where I'm stuck. How do I find that center angle? Or is there a much simpler way of attacking this problem?
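
    What this describes is polygon insetting (inward offsetting): push each edge inward along its normal by the bevel depth and take the intersection of consecutive offset edges as the inner vertices; the center angle never has to be computed explicitly. A sketch assuming a simple, counter-clockwise polygon and a depth small enough that adjacent offset edges still intersect cleanly:

        def inset_polygon(verts, depth):
            """Inner vertices for a bevel: offset every edge inward by `depth`
            and intersect consecutive offset edges (CCW polygon assumed)."""
            n = len(verts)
            lines = []  # each offset edge as (point, direction)
            for i in range(n):
                (x1, y1), (x2, y2) = verts[i], verts[(i + 1) % n]
                dx, dy = x2 - x1, y2 - y1
                length = (dx * dx + dy * dy) ** 0.5
                nx, ny = -dy / length, dx / length  # left normal = inward for CCW
                lines.append(((x1 + nx * depth, y1 + ny * depth), (dx, dy)))

            def intersect(line_a, line_b):
                (px, py), (dx1, dy1) = line_a
                (qx, qy), (dx2, dy2) = line_b
                denom = dx1 * dy2 - dy1 * dx2  # zero when the edges are parallel
                t = ((qx - px) * dy2 - (qy - py) * dx2) / denom
                return (px + t * dx1, py + t * dy1)

            # Inner vertex i sits where the offsets of its two incident edges meet.
            return [intersect(lines[i - 1], lines[i]) for i in range(n)]

        square = [(0, 0), (10, 0), (10, 10), (0, 10)]  # counter-clockwise
        print(inset_polygon(square, 2))
        # [(2.0, 2.0), (8.0, 2.0), (8.0, 8.0), (2.0, 8.0)]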

    Read the article

  • efficiently trimming postgresql tables

    - by agilefall
    I have about 10 tables with over 2 million records each, and one with 30 million. I would like to efficiently remove older data from each of these tables. My general algorithm is: create a temp table for each large table and populate it with the newer data; truncate the original tables; copy the temp data back to the original tables using "insert into originaltable (select * from tmp_table)". However, the last step, copying the data back, is taking longer than I'd like. I thought about deleting the original tables and making the temp tables "permanent", but I would lose constraint/foreign-key info. If I delete from the tables directly, it takes much longer. Given that I need to preserve all foreign keys and constraints, are there any faster ways of removing the older data? Thanks.

    Read the article
