Search Results

Search found 5104 results on 205 pages for 'evolutionary algorithm'.

Page 184/205

  • Convert pre-IEEE-754 C++ floating-point numbers to/from C#

    - by Richard Kucia
    Before .NET, before math coprocessors, before IEEE-754, Microsoft defined a bit pattern for floating-point numbers. Old versions of the C++ compiler happily used that definition. I am writing a C# app that needs to read/write such floating-point numbers in a file. How can I do the conversions between the two bit formats? I need conversion methods in both directions. This app is going to run in a PocketPC/WinCE environment. Changing the structure of the file is out of scope for this project. Is there a C++ compiler option that instructs it to use the old FP format? That would be ideal. I could then exchange data between the C# code and the C++ code using a null-terminated text string, and the C++ methods would be simple wrappers around the sprintf and atof functions. At the very least, I'm hoping someone can reply with the bit definitions for the old FP format, so I can put together a low-level bit-manipulation algorithm if necessary. Thanks.
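
    The old format is almost certainly Microsoft Binary Format (MBF), which is what Microsoft's compilers used before IEEE 754. In the 32-bit MBF single, the last byte is the exponent (biased by 128), the high bit of the third byte is the sign, and the remaining 23 bits are the mantissa with an implicit leading bit just after the radix point, so converting to IEEE 754 amounts to moving the sign bit and subtracting 2 from the exponent byte. A minimal C# sketch of one direction (it assumes the value is in range; MBF exponent bytes 1 and 2 would underflow after rebiasing and need special handling):

        using System;

        // Convert a 4-byte little-endian MBF single to an IEEE 754 float.
        static float MbfToIeee(byte[] mbf)
        {
            if (mbf[3] == 0) return 0f;                 // MBF zero: exponent byte is 0
            uint sign = (uint)(mbf[2] & 0x80);          // sign lives in bit 7 of byte 2
            uint exp = (uint)(mbf[3] - 2);              // rebias: 128 -> 127, and 0.1m -> 1.m
            uint mantissa = ((uint)(mbf[2] & 0x7F) << 16) | ((uint)mbf[1] << 8) | mbf[0];
            uint bits = (sign << 24) | (exp << 23) | mantissa;
            return BitConverter.ToSingle(BitConverter.GetBytes(bits), 0);
        }

    The reverse direction is the mirror image: add 2 to the IEEE exponent field and move the sign bit back down, rejecting NaNs, infinities, and magnitudes MBF cannot represent.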

  • Is it possible to store pointers in shared memory without using offsets?

    - by Joseph Garvin
    When using shared memory, each process may mmap the shared region into a different part of its address space. This means that when storing pointers within the shared region, you need to store them as offsets from the start of the shared region. Unfortunately, this complicates the use of atomic instructions (e.g. if you're trying to write a lock-free algorithm). For example, say you have a bunch of reference-counted nodes in shared memory, created by a single writer. The writer periodically atomically updates a pointer 'p' to point to a valid node with a positive reference count. Readers want to atomically increment the reference count of the node 'p' points to, which works because 'p' points to the beginning of a node (a struct) whose first element is a reference count. Since 'p' always points to a valid node, incrementing the ref count is safe, and makes it safe to dereference 'p' and access other members. However, this all only works when everything is in the same address space. If the nodes and the 'p' pointer are stored in shared memory, then clients suffer a race condition:

        1. x = read p
        2. y = x + offset
        3. increment refcount at y

    Between steps 1 and 3, p may change and x may no longer point to a valid node. The only workaround I can think of is somehow forcing all processes to agree on where to map the shared memory, so that real pointers rather than offsets can be stored in the mmap'd region. Is there any way to do that? I see MAP_FIXED in the mmap documentation, but I don't know how I could pick an address that would be safe.

  • How to implement a web cache: internal fragmentation vs. external fragmentation

    - by Summer_More_More_Tea
    Hi there: I came up with this question while playing with the Firefox web cache: how does the browser cache responses in limited disk space (taking my configuration as an example, 50 MB is the upper bound)? I think two approaches could be employed. One is to cache each response object whole, one by one, but this is inefficient and introduces external fragmentation, so the total cache space may not be fully used. The second is to treat the total space (50 MB) as one contiguous file split into fixed-length slots; incoming response objects are likewise treated as blocks of data of the same length as the slots. We can fill slots until the whole file is used up, then some replacement algorithm can swap out the old cached objects. The latter approach will of course bring in internal fragmentation, but in my opinion it is easier to implement and maintain than the first strategy. But when I look inside Firefox's Cache directory, I find it (maybe) uses a different method: a lot of variable-length files reside in that directory, and all those files are filled with undisplayable characters. I don't know, but really want to know, what mechanism a commercial browser, e.g. Firefox, employs to implement its web cache. Regards.

  • How to store dynamic references to parts of a text

    - by Antoine L
    In fact, my question concerns an algorithm. I need to be able to attach annotations to certain parts of a text, like a word or a group of words. The first thing that came to mind is to store the position of this part (its indexes) in the text. For instance, in the text 'The quick brown fox jumps over the lazy dog', I'd like to attach an annotation to 'quick brown fox', so the indexes of the annotation would be 4 - 18. But since the text is editable (other annotations could provoke a modification from the text's author), the annotated part is likely to move (the indexes could change). In fact, I don't know how to update the indexes of the annotated part. What if the text becomes 'Everyday, the quick brown fox jumps over the lazy dog'? I guess I have to watch every change of the text in the front-end application? The front-end part of the application will be HTML with JavaScript. I will be using PHP to develop the back-end part, and every text and annotation will be stored in a database.
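
    The usual approach is to report every edit as a pair (position, delta) and shift the stored ranges accordingly. A minimal sketch in C# (the Annotation type and the edit representation are assumptions, not from the question):

        using System.Collections.Generic;

        class Annotation { public int Start, End; public string Note; }

        // Shift annotation ranges in response to an edit at `position` that
        // inserted (delta > 0) or deleted (delta < 0) characters.
        static void ApplyEdit(List<Annotation> annotations, int position, int delta)
        {
            foreach (var a in annotations)
            {
                if (position <= a.Start)      // edit entirely before: shift the whole range
                {
                    a.Start += delta;
                    a.End += delta;
                }
                else if (position <= a.End)   // edit inside the range: grow or shrink it
                {
                    a.End += delta;
                }
                // edit after the range: nothing to do
            }
        }

    In the example above, inserting "Everyday, " (10 characters) at position 0 moves the range 4-18 to 14-28. In the browser, a change handler on the editable element can compute (position, delta) by diffing the old and new text before posting the edit to the PHP back end.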

  • When is someone else's code I use from the internet "mine"?

    - by robault
    I'm building a library from methods that I've found on the internet. Some are free to use or modify with no requirements; others say that if I leave a comment in the code it's okay to use; others say that when I use the code I have to attribute the use of someone's code in my application (in the credits for my app, I guess). What I've been doing is reorganizing classes, renaming methods, adding descriptions (code comments), renaming the parameters and names inside the methods to something meaningful, optimizing loops if applicable, changing return types, adding try/catch/throw blocks, adding parameter checks and cleaning up resources in the methods. For example: I didn't come up with the algorithm for blurring a Bitmap, but I've taken the basic example of iterating through the pixels and turned it into a decent library method (applying the aforementioned modifications). I understand how to go about building it now myself, but I didn't actually hit the keystrokes to make it, and I couldn't have come up with it before learning from their example. What about code people get in answers on Stack Overflow or examples from CodeProject? At what point can I drop their requirements because at n% their code became mine? FWIW, I intend on using the libraries to create products that I will sell.

  • Laggy interface with NSSearchField hooked up to an NSArrayController via bindings

    - by Simone Manganelli
    So I've got an NSSearchField hooked up directly to an NSArrayController via bindings, attached to the filterPredicate, so that without any code, the user can just type in the NSSearchField and filter the list of objects in the NSArrayController presented to him in the interface (an NSCollectionView, to be specific). The NSSearchField is hooked up to provide live searching, so that the NSCollectionView is filtered instantly as the user types, not after waiting for a short period for the user to stop typing. However, the problem is that this makes the interface really laggy. Typing is delayed significantly, by 0.5-1 seconds, and it seems like the NSCollectionView is trying to animate each and every rearrangement of items for each portion of the search string that the user enters. What I'd like is for the searching to be live, but the typing in the search field to be fluid, and the results to filter as fast as possible. Is there a way to do this via bindings, or will I need to put in some custom code that triggers the filterPredicate on a separate thread? (Note that I've got a custom sorting algorithm set up on the NSArrayController, and removing it seems to help a bit with the lagginess, but not completely.)

  • Vector rotations: 3D camera tilting

    - by TallGuy
    Hopefully an easy answer, but I cannot get it. I have a 3D render engine I have written. I have the camera position, the lookat position and the up vector. I want to be able to "tilt" the camera left, right, up and down, like a camera on a fixed tripod where you can grab the handle and tilt it up, down, left, right, etc. The maths stumps me. I have been able to do forwards/backwards dolly and up/down/left/right panning, but cannot work out the vector math to get it to tilt. For left and right tilt I want to rotate the lookat position around the camera position, but I need to take into account the up vector, otherwise the rotation doesn't know which axis to turn around. The maths/algorithm I need is along the lines of:

        Camera = (cx, cy, cz)
        Lookat = (lx, ly, lz)
        Up     = (ux, uy, uz)
        RotatePointAroundVector(lx, ly, lz, ux, uy, uz, amount)

    Can anyone assist with the maths involved? Many thanks.
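
    What RotatePointAroundVector describes is Rodrigues' rotation formula: for a unit axis k and angle t, v' = v cos t + (k x v) sin t + k (k . v)(1 - cos t). A sketch in C# with a hypothetical Vec3 type; for left/right tilt, rotate the view direction around the up vector, and for up/down tilt, rotate it around the camera's right vector, Cross(dir, up):

        using System;

        struct Vec3
        {
            public double X, Y, Z;
            public Vec3(double x, double y, double z) { X = x; Y = y; Z = z; }
            public static Vec3 operator +(Vec3 a, Vec3 b) => new Vec3(a.X + b.X, a.Y + b.Y, a.Z + b.Z);
            public static Vec3 operator -(Vec3 a, Vec3 b) => new Vec3(a.X - b.X, a.Y - b.Y, a.Z - b.Z);
            public static Vec3 operator *(Vec3 a, double s) => new Vec3(a.X * s, a.Y * s, a.Z * s);
            public static double Dot(Vec3 a, Vec3 b) => a.X * b.X + a.Y * b.Y + a.Z * b.Z;
            public static Vec3 Cross(Vec3 a, Vec3 b) => new Vec3(
                a.Y * b.Z - a.Z * b.Y, a.Z * b.X - a.X * b.Z, a.X * b.Y - a.Y * b.X);
        }

        // Rodrigues' rotation: rotate v around the unit-length axis k by `angle` radians.
        static Vec3 RotateAroundAxis(Vec3 v, Vec3 k, double angle)
        {
            double c = Math.Cos(angle), s = Math.Sin(angle);
            return v * c + Vec3.Cross(k, v) * s + k * (Vec3.Dot(k, v) * (1 - c));
        }

        // Left/right tilt: spin the view direction around Up, keeping the camera fixed.
        static Vec3 TiltLookat(Vec3 camera, Vec3 lookat, Vec3 up, double amount)
        {
            return camera + RotateAroundAxis(lookat - camera, up, amount);
        }

    When tilting up/down, re-rotate the up vector with the same call, or the camera will pick up an unintended roll.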

  • How to migrate primary key generation from "increment" to "hi-lo"?

    - by Bevan
    I'm working with a moderately sized SQL Server 2008 database (around 120 tables; backups are around 4 GB compressed) where all the table primary keys are declared as simple int columns. At present, primary key values are generated by NHibernate with the increment identity generator, which has worked well thus far, but precludes moving to a multiprocessing environment. Load on the system is growing, so I'm evaluating the work required to allow the use of multiple servers accessing a common database backend. Transitioning to the hi-lo generator seems to be the best way forward, but I can't find a lot of detail about how such a migration would work. Will NHibernate automatically create rows in the hi-lo table for me, or do I need to script these manually? If NHibernate does insert rows automatically, does it properly take account of existing key values? If NHibernate does take care of things automatically, that's great. If not, are there any tools to help?

    Update: NHibernate's increment identifier generator works entirely in memory. It's seeded by selecting the maximum value of used identifiers from the table, but from that point on it allocates new values by a simple increment, without reference back to the underlying database table. If any other process adds rows to the table, you end up with primary key collisions. You can run multiple threads within the one process just fine, but you can't run multiple processes. For comparison, the NHibernate identity generator works by configuring the database tables with identity columns, putting control over primary key generation in the hands of the database. This works well, but compromises the unit-of-work pattern. The hi-lo algorithm sits in between these: generation of primary keys is coordinated through the database, allowing for multiprocessing, but actual allocation can occur entirely in memory, avoiding problems with the unit-of-work pattern.
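
    For reference, the allocation scheme itself is small. A minimal sketch (an illustration of the algorithm, not NHibernate's actual implementation; fetchAndIncrementHi stands in for one SELECT/UPDATE round-trip against the hi-lo table):

        using System;

        class HiLoGenerator
        {
            private readonly int maxLo;   // block size
            private long hi;              // current block number, owned via the database
            private int lo;               // next slot within the block

            public HiLoGenerator(int maxLo)
            {
                this.maxLo = maxLo;
                this.lo = maxLo;          // force a database fetch on first use
            }

            public long NextId(Func<long> fetchAndIncrementHi)
            {
                if (lo >= maxLo)          // block exhausted: one database round-trip
                {
                    hi = fetchAndIncrementHi();
                    lo = 0;
                }
                return hi * maxLo + lo++; // everything else happens in memory
            }
        }

    Because each process reserves a whole block of maxLo keys per round-trip, several processes can insert concurrently without colliding, which is exactly the property the increment generator lacks. Migrating existing data then means seeding the hi row so that the first reserved block starts above the current maximum key.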

  • Creating a smart text generator

    - by royrules22
    I'm doing this for fun (or, as 4chan says, "for teh lolz") and if I learn something on the way, all the better. I took an AI course almost 2 years ago now and I really enjoyed it, but I managed to forget everything, so this is a way to refresh that. Anyway, I want to be able to generate text given a set of inputs. Basically this will read forum posts (or maybe Twitter tweets) and then generate a comment based on the learning. Now the simplest way would be to use a Markov chain text generator, but I want something a little bit more complex than that, as the Markov chain basically only learns by word order (which word is most likely to appear after word x, given the input text). I'm trying to see if there's something I can do to make it a little bit smarter. For example, I want it to do something like this:

      1. Learn from a large selection of posts in a message board, but don't weight it too much.
      2. For each post: learn from the other comments in that post, and weight these inputs higher.
      3. Generate a comment and post it.
      4. See what other users' reaction to your post was. If good, weight it positively so you make more posts that are similar to the one made, and vice versa if negative.

    It's the weighting and learning-from-mistakes part that I'm not sure how to implement. I thought about artificial neural networks (mainly because I remember enjoying that chapter), but as far as I can tell they're mainly used to classify things (i.e. given a finite set of choices [x1...xn], which x is this given input?), not really to generate anything. I'm not even sure if this is possible, and if it is, what should I go about learning/figuring out? What algorithm is best suited for this? To those worried that I will use this as a bot to spam or provide bad answers to SO: I promise that I will not use this to provide (bad) advice or to spam for profit. I definitely will not post its nonsensical thoughts on SO. I plan to use it for my own amusement. Thanks!
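
    For a concrete baseline to build the weighting on top of, here is a minimal word-bigram Markov generator sketch in C# (the corpus string and output length are placeholders). The natural place to hook in the reinforcement idea is the transition list: weighting a post's outcome up or down becomes adding or removing copies of its transitions.

        using System;
        using System.Collections.Generic;

        class MarkovSketch
        {
            static void Main()
            {
                string corpus = "the quick brown fox jumps over the lazy dog the quick dog sleeps";
                string[] words = corpus.Split(' ');

                // Transition table: word -> all observed successors (duplicates encode frequency).
                var chain = new Dictionary<string, List<string>>();
                for (int i = 0; i + 1 < words.Length; i++)
                {
                    if (!chain.TryGetValue(words[i], out var next))
                        chain[words[i]] = next = new List<string>();
                    next.Add(words[i + 1]);
                }

                // Walk the chain, sampling each successor proportionally to its frequency.
                var rng = new Random();
                var output = new List<string> { words[rng.Next(words.Length)] };
                for (int i = 0; i < 15; i++)
                {
                    if (!chain.TryGetValue(output[output.Count - 1], out var candidates)) break;
                    output.Add(candidates[rng.Next(candidates.Count)]);
                }
                Console.WriteLine(string.Join(" ", output));
            }
        }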

  • 500 Worker Threads, what kind of thread pool?

    - by Submerged
    I am wondering if this is the best way to do this. I have about 500 threads that run indefinitely, but Thread.sleep for a minute when done with one cycle of processing.

        ExecutorService es = Executors.newFixedThreadPool(list.size() + 1);
        for (int i = 0; i < list.size(); i++) {
            es.execute(coreAppVector.elementAt(i)); // coreAppVector is a vector of Thread subclasses
        }

    The code that is executing is really simple and basically just this:

        class aThread extends Thread {
            public void run() {
                while (true) {
                    try {
                        Thread.sleep(ONE_MINUTE); // sleep throws InterruptedException, so it must be caught
                    } catch (InterruptedException e) {
                        return;
                    }
                    // Lots of computation every minute
                }
            }
        }

    I do need a separate thread for each running task, so changing the architecture isn't an option. I tried making my thread pool size equal to Runtime.getRuntime().availableProcessors(), which attempted to run all 500 threads, but only let 8 (4 cores with hyperthreading) of them execute. The other threads wouldn't surrender and let other threads have their turn. I tried putting in a wait() and notify(), but still no luck. If anyone has a simple example or some tips, I would be grateful! Well, the design is arguably flawed. The threads implement genetic programming (GP), a type of learning algorithm. Each thread analyzes trends and makes predictions. If a thread ever completes, the learning is lost. That said, I was hoping that sleep() would allow me to share some of the resources while one thread isn't "learning".

  • How to functionally generate a tree breadth-first. (With Haskell)

    - by Dennetik
    Say I have the following Haskell tree type, where "State" is a simple wrapper:

        data Tree a = Branch (State a) [Tree a]
                    | Leaf (State a)
                    deriving (Eq, Show)

    I also have a function "expand :: Tree a -> Tree a" which takes a leaf node and expands it into a branch, or takes a branch and returns it unaltered. This tree type represents an N-ary search tree. Searching depth-first is a waste, as the search space is obviously infinite: I can easily keep on expanding the search space with the use of expand on all the tree's leaf nodes, and the chance of accidentally missing the goal state is huge. Thus the only solution is a breadth-first search, implemented pretty decently over here, which will find the solution if it's there. What I want to generate, though, is the tree traversed up to finding the solution. This is a problem because I only know how to do this depth-first, which could be done by simply calling the "expand" function again and again upon the first child node... until a goal state is found. (This would really not generate anything other than a really uncomfortable list.) Could anyone give me any hints on how to do this (or an entire algorithm), or a verdict on whether or not it's possible with a decent complexity? (Or any sources on this, because I found rather few.)

  • Where to post code for open source usage?

    - by Douglas
    I've been working for a few weeks now with the Google Maps API v3, and have done a good bit of development for the map I've been creating. Some of the things I've done have had to be done to add usability where there previously was not any, at least not that I could find online. Essentially, I made a list of what had to be done, searched all over the web for ways to do what I needed, and found that some were not (at the time) possible, in the "grab an example off the web" sense. Thus, in my working on this map, I have created a number of very useful tools, which I would like to share with the development community. Is there anywhere I could use as a hub, apart from my portfolio ( http://dougglover.com ), to allow people to view and recycle my work? I know how hard it can be to need to do something and be unable to find the solution elsewhere, and I don't think that if something has been done before, it should necessarily need to be written again and again. Hence open source code, right? Firstly, I was considering coming on here and asking a question, and then just answering it. The problem there is I assume that would just look like a big reputation grab. If not, please let me know and I'll go ahead and do that so people here can see it. Other suggestions appreciated. Some stuff I've made:

      • A (new and improved) LatLng generator:
        - Works quicker; generates a LatLng based on the position of a draggable marker.
        - Allows searching for an address to place the marker on/near the desired location (much better than having to scroll to your location all the way from Siberia).
        - Since it's a draggable marker, double-clicking zooms in, instead of creating a new LatLng marker like the one I was originally using.
      • The ability to create entirely custom "Smart Paths":
        - Plot LatLng points on the map which connect to each other just like they do on actual Google Maps.
        - Using Dijkstra's algorithm in JavaScript, the routing is intelligent and always gives the shortest possible route, using the points provided.
        - A simple, easy-to-read multi-dimensional array system allows for easily adding new points to the grid.

    Any suggestions, etc. appreciated.

  • Propositional theorems

    - by gcc
    Wang's algorithm proves theorems in the propositional calculus by transforming sequents. A worked example (R1-R5 are the transformation rules):

        S1: P → Q, Q → R, ¬R ⊢ ¬P       Initial sequent.
        S2: ¬P ∨ Q, ¬Q ∨ R, ¬R ⊢ ¬P     Two applications of R5.
        S3: ¬P ∨ Q, ¬Q ∨ R ⊢ ¬P, R      R1.
        S4: ¬P, ¬Q ∨ R ⊢ ¬P, R          S4 and S5 are obtained from S3 with R3. Note that S4
                                        is an axiom, since ¬P appears on both sides of the
                                        sequent arrow at the top level.
        S5: Q, ¬Q ∨ R ⊢ ¬P, R           The other sequent generated by the application of R3.
        S6: Q, ¬Q ⊢ ¬P, R               S6 and S7 are obtained from S5 using R3.
        S7: Q, R ⊢ ¬P, R                This is an axiom.
        S8: Q ⊢ ¬P, R, Q                Obtained from S6 using R1. S8 is an axiom.

    The original sequent is now proved, since it has successfully been transformed into a set of three axioms (S4, S7, S8) with no unproved sequents left over. My homework is quite complicated, as you can see, and I must finish it in 4 days. I just want help: can you show me how I should construct the algorithm, or how I should take and store the input?
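
    On storing the input: one possibility (an assumption, not something given in the assignment) is a small formula AST plus a sequent holding two lists of formulas, at which point rules like R5 become local rewrites. A C# sketch (records need C# 9 or later):

        using System.Collections.Generic;

        // Formula AST for the propositional calculus.
        abstract record Formula;
        record Var(string Name) : Formula;
        record Not(Formula F) : Formula;
        record And(Formula L, Formula R) : Formula;
        record Or(Formula L, Formula R) : Formula;
        record Implies(Formula L, Formula R) : Formula;

        // A sequent: the formulas on the left and right of the arrow.
        record Sequent(List<Formula> Left, List<Formula> Right);

        static class Rules
        {
            // R5: rewrite a -> b as ¬a ∨ b, recursively.
            public static Formula R5(Formula f) => f switch
            {
                Implies(var a, var b) => new Or(new Not(R5(a)), R5(b)),
                And(var a, var b)     => new And(R5(a), R5(b)),
                Or(var a, var b)      => new Or(R5(a), R5(b)),
                Not(var a)            => new Not(R5(a)),
                _                     => f
            };

            // A sequent is an axiom when some formula appears on both sides.
            public static bool IsAxiom(Sequent s) => s.Left.Exists(s.Right.Contains);
        }

    The other rules follow the same pattern: each one inspects the top-level shape of a formula in the sequent and produces one or two new sequents to prove.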

  • Create a new delegate class for each asynchronous image download?

    - by Charles S.
    First, I'm using an NSURLConnection to download JSON data from Twitter. Then I'm using a second NSURLConnection to download the corresponding user avatar images (the URLs to the images are parsed from the first data download). For the first data connection, I have my TwitterViewController set as the NSURLConnection delegate. I've created a separate class (ImageDownloadDelegate) to function as the delegate for a second NSURLConnection that handles the images. After the tweets are finished downloading, I'm using this code to get the avatars:

        for (int j = 0; j < [self.tweets count]; j++) {
            ImageDownloadDelegate *imgDelegate = [[ImageDownloadDelegate alloc] init];
            Tweet *myTweet = [self.tweets objectAtIndex:j];
            imgDelegate.tweet = myTweet;
            imgDelegate.table = timeline;
            NSURLRequest *request = [NSURLRequest requestWithURL:[NSURL URLWithString:myTweet.imageURL]
                                                     cachePolicy:NSURLRequestUseProtocolCachePolicy
                                                 timeoutInterval:60];
            imgConnection = [[NSURLConnection alloc] initWithRequest:request delegate:imgDelegate];
            [imgDelegate release];
        }

    So basically a new instance of the delegate class is created for each image that needs to be downloaded. Is this the best way to go about this? But then there's no way to figure out which image is associated with which tweet, correct? The algorithm works fine... I'm just wondering if I'm going about it the most efficient way.

  • How do I create efficient instance variable mutators in Matlab?

    - by Trent B
    Previously, I implemented mutators as follows; however, it ran spectacularly slowly on a recursive OO algorithm I'm working on, and I suspected it may have been because I was duplicating objects on every function call... is this correct?

        %% Example Only
        function obj2 = tripleAllPoints(obj1)
            obj1.pts = obj1.pts * 3;
            obj2 = obj1;
        end

    I then tried implementing mutators without using the output object; however, it appears that in MATLAB I can't do this: the changes won't "stick" because of a scope issue?

        %% Example Only
        function tripleAllPoints(obj1)
            obj1.pts = obj1.pts * 3;
        end

    For application purposes, an extremely simplified version of my code (which uses OO and recursion) is below.

        classdef myslice
            properties
                pts   % array of pts
                nROW  % number of rows
                nDIM  % number of dimensions
                subs  % sub-slices
            end % end properties

            methods
                function calcSubs(obj)
                    obj.subs = cell(1, obj.nROW);
                    for i = 1:obj.nROW
                        obj.subs{i} = myslice;
                        obj.subs{i}.pts = obj.pts(1:i, 2:end);
                    end
                end

                function vol = calcVol(obj)
                    if obj.nROW == 1
                        obj.volume = prod(obj.pts);
                    else
                        obj.volume = 0;
                        calcSubs(obj);
                        for i = 1:obj.nROW
                            obj.volume = obj.volume + calcVol(obj.subs{i});
                        end
                    end
                end
            end % end methods
        end % end classdef

  • Best way to detect similar email addresses?

    - by Chris
    I have a list of ~20,000 email addresses, some of which I know to be fraudulent attempts to get around a "1 per e-mail" limit ([email protected], [email protected], [email protected], etc...). I want to find similar email addresses for evaluation. Currently I'm using a Levenshtein algorithm to check each e-mail against the others in the list and report any with an edit distance of less than 2. However, this is painstakingly slow. Is there a more efficient approach? The test code I'm using now is:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.IO;
        using System.Threading;

        namespace LevenshteinAnalyzer
        {
            class Program
            {
                const string INPUT_FILE = @"C:\Input.txt";
                const string OUTPUT_FILE = @"C:\Output.txt";

                static void Main(string[] args)
                {
                    var inputWords = File.ReadAllLines(INPUT_FILE);
                    var outputWords = new SortedSet<string>();

                    for (var i = 0; i < inputWords.Length; i++)
                    {
                        if (i % 100 == 0)
                            Console.WriteLine("Processing record #" + i);

                        var word1 = inputWords[i].ToLower();

                        for (var n = i + 1; n < inputWords.Length; n++)
                        {
                            if (i == n) continue;

                            var word2 = inputWords[n].ToLower();

                            if (word1 == word2) continue;
                            if (outputWords.Contains(word1)) continue;
                            if (outputWords.Contains(word2)) continue;

                            var distance = LevenshteinAlgorithm.Compute(word1, word2);
                            if (distance <= 2)
                            {
                                outputWords.Add(word1);
                                outputWords.Add(word2);
                            }
                        }
                    }

                    File.WriteAllLines(OUTPUT_FILE, outputWords.ToArray());
                    Console.WriteLine("Found {0} words", outputWords.Count);
                }
            }
        }

  • Postback not working with ASP.NET Routing (Validation of viewstate MAC failed)

    - by Robert
    Hi. I'm using the ASP.NET 3.5 SP1 System.Web.Routing with classic WebForms, as described in http://chriscavanagh.wordpress.com/2008/04/25/systemwebrouting-with-webforms-sample/. All works fine: I have custom SEO URLs and even the postback works. But there is a case where the postback always fails and I get:

        Validation of viewstate MAC failed. If this application is hosted by a Web Farm
        or cluster, ensure that <machineKey> configuration specifies the same
        validationKey and validation algorithm. AutoGenerate cannot be used in a cluster.

    Here is the scenario to reproduce the error:

      1. Create a standard webform mypage.aspx with a button.
      2. Create a Route that maps "a/b/{id}" to "~/mypage.aspx".
      3. When you execute the site, you can navigate to http://localhost:XXXX/a/b/something and the page works. But when you press the button, you get the error.

    The error doesn't happen when the Route is just "a/{id}". It seems to be related to the number of sub-paths in the URL: if there are at least 2 sub-paths, the viewstate validation fails. You get the error even with EnableViewStateMac="false". Any ideas? Is it a bug? Thanks

  • Computing "average" of two colors

    - by Francisco P.
    This is only marginally programming related - it has much more to do with colors and their representation. I am working on a very low-level app. I have an array of bytes in memory. Those are characters. They were rendered with anti-aliasing: they have values from 0 to 255, 0 being fully transparent and 255 totally opaque (alpha, if you wish). I am having trouble conceiving an algorithm for the rendering of this font. I'm doing the following for each pixel:

        // intensity is the weight I talked about: 0 to 255
        intensity = glyphs[text[i]][x + GLYPH_WIDTH*y];

        if (intensity == 255) continue; // Don't draw it, fully transparent
        else if (intensity == 0) setPixel(x + xi, y + yi, color, base); // Fully opaque, can draw original color
        else {
            // Here's the tricky part
            // Get the pixel in the destination for averaging purposes
            pixel = getPixel(x + xi, y + yi, base);

            // transfer is an int for calculations
            // This is my attempt at averaging
            transfer = (int) ((float)((float) (255.0 - (float) intensity/255.0) * (float) color.red + (float) pixel.red)/2);
            newPixel.red = (Byte) transfer;

            transfer = (int) ((float)((float) (255.0 - (float) intensity/255.0) * (float) color.green + (float) pixel.green)/2);
            newPixel.green = (Byte) transfer;

            // transfer = (int) ((float) ((float) 255.0 - (float) intensity)/255.0 * (((float) color.blue) + (float) pixel.blue)/2);
            transfer = (int) ((float)((float) (255.0 - (float) intensity/255.0) * (float) color.blue + (float) pixel.blue)/2);
            newPixel.blue = (Byte) transfer;

            // Set the newpixel in the desired mem. position
            setPixel(x + xi, y + yi, newPixel, base);
        }

    The results, as you can see, are less than desirable. That is a very zoomed-in image; at 1:1 scale it looks like the text has a green "aura". Any idea how to properly compute this would be greatly appreciated. Thanks for your time!
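
    For reference, conventional alpha compositing weights both the glyph color and the background by the coverage, instead of averaging the sum by 2 (which pulls every channel toward the background and can tint the result). A sketch in C#, assuming alpha = 255 means the glyph color fully covers the pixel; note that the snippet above treats 255 as transparent while the prose defines 255 as opaque, so one of the two conventions has to be flipped:

        // Blend one channel: src over dst with coverage alpha in [0, 255].
        static byte Blend(byte src, byte dst, int alpha)
        {
            return (byte)((src * alpha + dst * (255 - alpha)) / 255);
        }

        // newPixel.red   = Blend(color.red,   pixel.red,   intensity);
        // newPixel.green = Blend(color.green, pixel.green, intensity);
        // newPixel.blue  = Blend(color.blue,  pixel.blue,  intensity);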

  • Given a date range how to calculate the number of weekends partially or wholly within that range?

    - by andybak
    Given a date range, how do you calculate the number of weekends partially or wholly within that range? (A few definitions, as requested: take 'weekend' to mean Saturday and Sunday. The date range is inclusive, i.e. the end date is part of the range. 'Wholly or partially' means that any part of the weekend falling within the date range means the whole weekend is counted.) To simplify, I imagine you only actually need to know the duration and what day of the week the initial day is... I darn well know it's going to involve integer division by 7 and some logic to add 1 depending on the remainder, but I can't quite work out what... extra points for answers in Python ;-)

    Edit: Here's my final code. Weekends are Friday and Saturday (as we are counting nights stayed) and days are 0-indexed starting from Monday. I used onebyone's algorithm and Tom's code layout. Thanks a lot, folks.

        def calc_weekends(start_day, duration):
            days_until_weekend = [5, 4, 3, 2, 1, 1, 6]
            adjusted_duration = duration - days_until_weekend[start_day]
            if adjusted_duration < 0:
                weekends = 0
            else:
                weekends = (adjusted_duration / 7) + 1
            if start_day == 5 and duration % 7 == 0:  # Saturday to Saturday is an exception
                weekends += 1
            return weekends

        if __name__ == "__main__":
            days = ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun']
            for start_day in range(0, 7):
                for duration in range(1, 16):
                    print "%s to %s (%s days): %s weekends" % (days[start_day],
                        days[(start_day + duration) % 7], duration,
                        calc_weekends(start_day, duration))
                print

  • Archiver Securing SQLite Data without using Encryption on iPhone

    - by Redrocks
    I'm developing an iPhone app that uses Core Data with a SQLite data store and lots of images in the resource bundle. I want a "simple" way to obfuscate the file structure of the SQLite database and the image files to prevent the casual hacker/unscrupulous developer from gaining access to them. When the app is deployed, the database file and image files would be obfuscated. Upon launching, the app would read in and un-obfuscate the database file, write the un-obfuscated version to the user's "tmp" directory for use by Core Data, and read/un-obfuscate image files as needed. I'd like to apply a simple algorithm to the files that would somehow scramble/manipulate the file data so that the SQLite database data isn't discernible when the DB is opened in a text editor, and so that neither is recognized by other applications (SQLite Manager, Photoshop, etc.). It seems, from the information I've read, that I could use NSFileManager, NSKeyedArchiver, and NSData to accomplish this, but I'm not sure how to proceed. I've been developing software for many years, but I'm new to everything Cocoa Touch, Mac, and iPhone. Also, I've never had to secure/encrypt my data, so this is new. Any thoughts, suggestions, or links to solutions are appreciated.

  • Authentication question (security code generation logic)

    - by Stick it to THE MAN
    I have a security number generator device, small enough to go on a key ring, which has a six-digit LCD display and a button. After I have entered my account name and password on an online form, I press the button on the security device and enter the security code number which is displayed. I get a different number every time I press the button, and the number generator has a serial number on the back which I had to input during the account set-up procedure. I would like to incorporate similar functionality in my website. As far as I understand, these are the main components:

      1. Generate a unique N-digit alphanumeric sequence during registration and assign it to the user (permanently).
      2. Allow the user to generate an N (or M?) digit alphanumeric sequence remotely.
      3. Identify the user from the number generated in step 2 (which decryption method is the most robust way to do this?).

    For now, I don't care about the hardware side; I am only interested in knowing how I may choose a suitable algorithm that will allow the user to generate an N (or M?) long alphanumeric sequence, presumably using his unique ID as a seed. I have the following questions:

      • Have I identified all the steps required in such an authentication system? If not, please point out what I have missed and why it is important.
      • What are the most robust encryption/decryption algorithms I can use for steps 1 through 3 (preferably using 64 bits)?
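
    What the device described is doing closely resembles HOTP (RFC 4226): the token and the server share a per-device secret key (registered via the serial number) and a counter that advances on every button press; the server never decrypts the code, it recomputes it. A minimal sketch in C#:

        using System;
        using System.Security.Cryptography;

        static class Hotp
        {
            // RFC 4226: HMAC-SHA1 over a big-endian 8-byte counter,
            // dynamically truncated to a short numeric code.
            public static int Generate(byte[] secret, long counter, int digits = 6)
            {
                byte[] msg = BitConverter.GetBytes(counter);
                if (BitConverter.IsLittleEndian) Array.Reverse(msg); // big-endian per the RFC

                using (var hmac = new HMACSHA1(secret))
                {
                    byte[] hash = hmac.ComputeHash(msg);
                    int offset = hash[hash.Length - 1] & 0x0F;       // dynamic truncation
                    int binary = ((hash[offset] & 0x7F) << 24)
                               | (hash[offset + 1] << 16)
                               | (hash[offset + 2] << 8)
                               | hash[offset + 3];
                    return binary % (int)Math.Pow(10, digits);
                }
            }
        }

    Server-side verification is then: look the user up by account name, fetch that user's secret and last counter, and accept the login if Generate matches for any counter within a small look-ahead window (the token's counter may have advanced past the server's if the button was pressed without logging in).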

  • Doctrine/symfony: getSqlQuery() output in phpMyAdmin/SQL tab

    - by user248959
    Hi, I have created this query, which works OK:

        $q1 = Doctrine_Query::create()
            ->from('Usuario u')
            ->leftJoin('u.AmigoUsuario a ON u.id = a.user2_id OR u.id = a.user1_id')
            ->where("a.user2_id = ? OR a.user1_id = ?", array($id, $id))
            ->andWhere("u.id <> ?", $id)
            ->andWhere("a.estado LIKE ?", 1);
        echo $q1->getSqlQuery();

    The call to getSqlQuery() outputs this clause:

        SELECT s.id AS s__id, s.username AS s__username, s.algorithm AS s__algorithm,
               s.salt AS s__salt, s.password AS s__password, s.is_active AS s__is_active,
               s.is_super_admin AS s__is_super_admin, s.last_login AS s__last_login,
               s.email_address AS s__email_address, s.nombre_apellidos AS s__nombre_apellidos,
               s.sexo AS s__sexo, s.fecha_nac AS s__fecha_nac, s.provincia AS s__provincia,
               s.localidad AS s__localidad, s.fotografia AS s__fotografia, s.avatar AS s__avatar,
               s.avatar_mensajes AS s__avatar_mensajes, s.created_at AS s__created_at,
               s.updated_at AS s__updated_at, a.id AS a__id, a.user1_id AS a__user1_id,
               a.user2_id AS a__user2_id, a.estado AS a__estado
        FROM sf_guard_user s
        LEFT JOIN amigo_usuario a ON ((s.id = a.user2_id OR s.id = a.user1_id))
        WHERE ((a.user2_id = ? OR a.user1_id = ?) AND s.id <> ? AND a.estado LIKE ?)

    If I take that clause to the phpMyAdmin SQL tab, I get this error:

        #1064 - You have an error in your SQL syntax; check the manual that corresponds
        to your MySQL server version for the right syntax to use near
        '? OR a.user1_id = ?) AND s.id <> ? AND a.estado LIKE ?) LIMIT 0, 30' at line 1

    Why am I getting this error? Regards, Javi

  • Multi-dimensional array edge/border conditions

    - by kirbuchi
    Hi, I'm iterating over a 3-dimensional array (which is an image with 3 values per pixel) to apply a 3x3 filter to each pixel, as follows:

        // For each value on the image
        for (i = 0; i < 3*width*height; i++) {
            // For each filter value
            for (j = 0; j < 9; j++) {
                if (notOutsideEdgesCondition) {
                    *(**(outArray)+i) += *(**(pixelArray)+i-1+(j%3)) * (*(filter+j));
                }
            }
        }

    I'm using pointer arithmetic because if I used array notation I'd have 4 loops, and I'm trying to have the least possible number of loops. My problem is that my notOutsideEdgesCondition is getting quite out of hand, because I have to consider 8 border cases. I have the following conditions handled:

        Left column:  ((i % width) == 0) && (j % 3 == 0)
        Right column: ((i-1) % width == 0) && (i > 1) && (j % 3 == 2)
        Upper row:    (i < width) && (j < 2)
        Lower row:    (i > (width*height - width)) && (j > 5)

    and I still have to consider the 4 corner cases, which will have longer expressions. At this point I've stopped and asked myself if this is the best way to go, because if I have a 5-line-long conditional evaluation it'll not only be truly painful to debug but will also slow down the inner loop. That's why I come to you to ask if there's a known algorithm to handle these cases or if there's a better approach for my problem. Thanks a lot.
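
    One standard way to make the edge tests disappear is to clamp the sampled coordinates, so border pixels replicate outward ("clamp to edge"); another is to run a testless loop over the interior only and handle the one-pixel border separately. A sketch of the clamping variant in C# (the array layout and names are assumptions):

        static int Clamp(int v, int lo, int hi) => v < lo ? lo : (v > hi ? hi : v);

        // 3x3 filter over an interleaved RGB image; out-of-range samples are
        // clamped to the nearest edge pixel, so no per-edge special cases remain.
        static void Convolve3x3(byte[] src, byte[] dst, int width, int height, float[] filter)
        {
            for (int y = 0; y < height; y++)
                for (int x = 0; x < width; x++)
                    for (int c = 0; c < 3; c++)
                    {
                        float acc = 0;
                        for (int j = 0; j < 9; j++)
                        {
                            int sx = Clamp(x + (j % 3) - 1, 0, width - 1);  // clamped neighbour column
                            int sy = Clamp(y + (j / 3) - 1, 0, height - 1); // clamped neighbour row
                            acc += src[(sy * width + sx) * 3 + c] * filter[j];
                        }
                        dst[(y * width + x) * 3 + c] = (byte)Clamp((int)acc, 0, 255);
                    }
        }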

  • How to transform a production to LL(1) for a list separated by a semicolon?

    - by Subb
    Hi, I'm reading this introductory book on parsing (which is pretty good, btw) and one of the exercises is to "build a parser for your favorite language." Since I don't want to die today, I thought I could write a parser for something relatively simple, i.e. a simplified CSS. Note: the book teaches you how to write an LL(1) parser using the recursive-descent algorithm. So, as a sub-exercise, I am building the grammar from what I know of CSS. But I'm stuck on a production that I can't transform into LL(1):

        // EBNF
        block = "{", declaration, {";", declaration}, [";"], "}"

        // BNF
        <block>       ::= "{" <declaration> "}"
        <declaration> ::= <single-declaration> <opt-end>
                        | <single-declaration> ";" <declaration>
        <opt-end>     ::= "" | ";"

    This describes a CSS block. Valid blocks can have the forms:

        { property : value }
        { property : value; }
        { property : value; property : value }
        { property : value; property : value; }
        ...

    The problem is the optional ";" at the end, because it overlaps with the starting character of {";", declaration}, so when my parser meets a semicolon in this context, it doesn't know what to do. The book talks about this problem, but in its example the semicolon is obligatory, so the rule can be modified like this:

        block = "{", declaration, ";", {declaration, ";"}, "}"

    So, is it possible to achieve what I'm trying to do using an LL(1) parser?
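
    It is: the grammar just needs left factoring so that every decision is made on the next token. One way to write it (a sketch, using the same EBNF conventions as above): after a declaration the parser sees either "}" or ";", and after a ";" it sees either "}" (the trailing-semicolon case) or the start of another declaration:

        block      = "{", declaration, block-tail
        block-tail = "}"
                   | ";", after-semi
        after-semi = "}"
                   | declaration, block-tail

    This is LL(1) as long as "}" and ";" cannot begin a declaration, which holds for CSS-like property names.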

  • CEIL is one too high for exact integer divisions

    - by Synetech
    This morning I lost a bunch of files, but because the volume they were on was both internally and externally defragmented, all of the information necessary for a 100% recovery is available; I just need to fill in the FAT where required. I wrote a program to do this and tested it on a copy of the FAT that I dumped to a file, and it works perfectly except that for a few of the files (17 out of 526), the FAT chain is one single cluster too long, and thus cross-linked with the next file. Fortunately I know exactly what the problem is. I used ceil in my EOF calculation because even a single byte over will require a whole extra cluster:

        // Cluster is the starting cluster of the file
        // Size is the size (in bytes) of the file
        // BPC is the number of bytes per cluster
        // NumClust is the number of clusters in the file
        // EOF is the last cluster of the file's FAT chain
        DWORD NumClust = ceil( (float)(Size / BPC) );
        DWORD EOF = Cluster + NumClust;

    This algorithm works fine for everything except files whose size happens to be an exact multiple of the cluster size, in which case they end up one cluster too long. I thought about it for a while but am at a loss as to a way to do this. It seems like it should be simple, but somehow it is surprisingly tricky. What formula would work for files of any size?
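
    Two observations about the snippet above. First, Size / BPC divides two integers before the cast, so the quotient is already truncated and ceil has nothing left to round up. Second, a chain of N clusters starting at Cluster ends at Cluster + N - 1, not Cluster + N, which overshoots by exactly one cluster precisely when the division is exact, matching the symptom. The usual idiom, continuing the snippet's own types, does the ceiling in integer arithmetic and steps back one:

        // Ceiling division in pure integer arithmetic: rounds up for any
        // Size that is not an exact multiple of BPC, and not otherwise.
        DWORD NumClust = (Size + BPC - 1) / BPC;

        // The chain occupies NumClust clusters starting at Cluster,
        // so the last one is NumClust - 1 steps past the first.
        DWORD EOF = Cluster + NumClust - 1;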
