Search Results

Search found 5104 results on 205 pages for 'evolutionary algorithm'.

Page 186/205 | < Previous Page | 182 183 184 185 186 187 188 189 190 191 192 193  | Next Page >

  • Problem with Boost::Asio for C++

    - by Martin Lauridsen
    Hi there, for my bachelor's thesis I am implementing a distributed version of an algorithm for factoring large integers (finding the prime factorisation). This has applications in e.g. the security of the RSA cryptosystem. My vision is that clients (Linux or Windows) will download an application and compute some numbers (these are independent, thus suited for parallelization). The numbers (not found very often) will be sent to a master server, which collects them. Once enough numbers have been collected, the master server will do the rest of the computation, which cannot be easily parallelized.

    Anyhow, to the technicalities. I was thinking of using Boost::Asio for a socket client/server implementation, for the clients' communication with the master server. Since I want to compile for both Linux and Windows, I thought Windows would be as good a place to start as any. So I downloaded the Boost library and compiled it as described on the Boost Getting Started page:

        bootstrap
        .\bjam

    It all compiled just fine. Then I try to compile one of the tutorial examples, client.cpp, from Asio, found (here.. edit: can't post link because of restrictions). I am using the Visual C++ compiler from Microsoft Visual Studio 2008, like this:

        cl /EHsc /I D:\Downloads\boost_1_42_0 client.cpp

    But I get this error:

        /out:client.exe
        client.obj
        LINK : fatal error LNK1104: cannot open file 'libboost_system-vc90-mt-s-1_42.lib'

    Anyone have any idea what could be wrong, or how I could move forward? I have been trying pretty much all week to get a simple client/server socket program for C++ working, but with no luck. Serious frustration kicking in. Thank you in advance.
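
    The missing library name is worth decoding: the -s suffix means auto-linking is requesting a static-runtime build of Boost.System, which cl asks for because its default runtime is /MT, and the linker also needs to be told where the staged libraries live. A hedged sketch of two possible ways out (paths assume the default bjam stage directory):

        rem build the static-runtime variant that auto-linking is asking for...
        .\bjam --with-system link=static runtime-link=static stage
        cl /EHsc /I D:\Downloads\boost_1_42_0 client.cpp /link /LIBPATH:D:\Downloads\boost_1_42_0\stage\lib

        rem ...or keep the default build and compile against the DLL runtime instead
        cl /EHsc /MD /I D:\Downloads\boost_1_42_0 client.cpp /link /LIBPATH:D:\Downloads\boost_1_42_0\stage\lib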

    Read the article

  • Find Adjacent Nodes A Star Path-Finding C++

    - by Infinity James
    Is there a better way to handle my FindAdjacent() function for my A* algorithm? It's awfully messy, and it doesn't set the parent node correctly. When it tries to find the path, it loops infinitely because two nodes end up as each other's parents. Any help would be amazing. This is my function:

        void AStarImpl::FindAdjacent(Node* pNode)
        {
            for (int i = -1; i <= 1; i++)
            {
                for (int j = -1; j <= 1; j++)
                {
                    if (pNode->mX != Map::GetInstance()->mMap[pNode->mX + i][pNode->mY + j].mX ||
                        pNode->mY != Map::GetInstance()->mMap[pNode->mX + i][pNode->mY + j].mY)
                    {
                        if (pNode->mX + i <= 14 && pNode->mY + j <= 14)
                        {
                            if (pNode->mX + i >= 0 && pNode->mY + j >= 0)
                            {
                                if (Map::GetInstance()->mMap[pNode->mX + i][pNode->mY + j].mTypeID != NODE_TYPE_SOLID)
                                {
                                    if (find(mOpenList.begin(), mOpenList.end(),
                                             &Map::GetInstance()->mMap[pNode->mX + i][pNode->mY + j]) == mOpenList.end())
                                    {
                                        Map::GetInstance()->mMap[pNode->mX + i][pNode->mY + j].mParent = &Map::GetInstance()->mMap[pNode->mX][pNode->mY];
                                        mOpenList.push_back(&Map::GetInstance()->mMap[pNode->mX + i][pNode->mY + j]);
                                    }
                                }
                            }
                        }
                    }
                }
            }
            mClosedList.push_back(&Map::GetInstance()->mMap[pNode->mX][pNode->mY]);
        }

    If you'd like any more code, just ask and I can post it.
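
    The usual guard that prevents parent cycles is to skip neighbours that have already been expanded (the closed list), not only those on the open list: once a node is closed it must never be re-parented. A language-neutral sketch of that shape in Python (the grid, node and list types here are hypothetical placeholders, not the classes above):

        def find_adjacent(grid, node, open_list, closed_set):
            """Expand the 8 neighbours of node, skipping walls and finished nodes."""
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    if di == 0 and dj == 0:
                        continue                                  # the node itself
                    x, y = node.x + di, node.y + dj
                    if not (0 <= x < len(grid) and 0 <= y < len(grid[0])):
                        continue                                  # off the map
                    neighbour = grid[x][y]
                    if neighbour.solid or neighbour in closed_set:
                        continue                                  # never re-parent a closed node
                    if neighbour not in open_list:
                        neighbour.parent = node                   # set at most once, so no cycles
                        open_list.append(neighbour)
            closed_set.add(node)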

    Read the article

  • How to change the date/time in Python for all modules?

    - by Felix Schwarz
    When I write business logic, my code often depends on the current time. For example, there is the algorithm which looks at each unfinished order and checks whether an invoice should be sent (which depends on the number of days since the job was ended). In these cases creating an invoice is not triggered by an explicit user action but by a background job.

    Now this creates a problem for me when it comes to testing: I can test invoice creation itself easily, but it is hard to create an order in a test and check that the background job identifies the correct orders at the correct time. So far I found two solutions:

    1. In the test setup, calculate the job dates relative to the current date. Downside: the code becomes quite complicated, as there are no explicit dates written anymore. The business logic is sometimes pretty complex for edge cases, so it becomes hard to debug due to all these relative dates.

    2. Use my own date/time accessor functions throughout my code. In the test I just set a current date and all modules get this date. So I can simulate an order creation in February and easily check that the invoice is created in April. Downside: third-party modules do not use this mechanism, so it's really hard to integrate and test these.

    The second approach was way more successful for me after all. Therefore I'm looking for a way to set the time Python's datetime and time modules return. Setting the date is usually enough; I don't need to set the current hour or second (even though this would be nice). Is there such a utility? Is there an (internal) Python API that I can use?
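
    The common pattern for this, now packaged by libraries such as freezegun, is to swap in a datetime subclass whose now()/today() return a fixed instant. A minimal sketch of the idea; note that C extensions and third-party modules holding their own reference to the real type will not see the patch, which is exactly the integration problem described above:

        import datetime as real_datetime

        class FrozenDateTime(real_datetime.datetime):
            """A drop-in datetime whose 'current time' is whatever we set."""
            frozen_now = None

            @classmethod
            def now(cls, tz=None):
                return cls.frozen_now

            @classmethod
            def today(cls):
                return cls.frozen_now

        def freeze_time(module, instant):
            """Make `module` (hypothetical, e.g. the billing code) see `instant` as now.
            Assumes the module did `from datetime import datetime`."""
            FrozenDateTime.frozen_now = instant
            module.datetime = FrozenDateTime

    In a test this would look like freeze_time(billing, real_datetime.datetime(2010, 2, 1)) before creating the order, then re-freezing at an April date before running the background job.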

    Read the article

  • Is learning C++ a good idea?

    - by chang
    The more I hear and read about C++ (e.g. this: http://lwn.net/Articles/249460/), the more I get the impression that I'd waste my time learning it. I once wrote a network routing algorithm in C++ for a simulator, and it was a pain (as expected, especially coming from a perl/python/Java background ...). I'm never happy about giving up on some technology, but I would be happy if I could limit my knowledge of C-family languages to just C, C# and Objective-C (even OS X's Cocoa, which is huge and takes a lot of time to learn, looks like joy compared to C++ ...).

    Do I need to consider myself dumb or unwilling, just because I'm not partial to the pain involved in learning this stuff? Technologies advance, and there will be options other than C++ when deciding on implementation languages, or not? And as for speed: if speed were that critical, I'd go for a plain C implementation instead, or write C extensions for much more productive languages like ruby or python ...

    The one-line version of the above: will C++ stay such a relevant language that every committed programmer should be familiar with it?

    [ edit / thank you very much for your interesting and useful answers so far .. ]
    [ edit / .. i am accepting the top-rated answer; thanks again for all answers! ]

    Read the article

  • iPhone: How to Determine Average Light/Dark of an Area of an UIImage

    - by TechZen
    I need to place labels with a transparent background over a variable-content UIImage. Readability will vary significantly depending on the relationship between the color of the label's text and the color/luminosity of the area of the image displayed under the label. Since the image will be constantly changing, the color of the label's text needs to change in sync.

    I have found several techniques for determining the color, perceived luminosity, etc. of a single pixel. However, I need to rather quickly (while a view loads) determine the rough perceived color/luminosity of an area of the UIImage under the frame of the UILabel. I presume I will also need to measure the alpha, because the same color/luminosity looks different at different alpha values.

    Is there a way to calculate such a value for an area? Will I be reduced to simply summing pixels? If it comes to that, is there an algorithm to accomplish this? I've thought of two possible approaches:

    1. Perform some "folding" operations, i.e. combining pixels from one half of the area with the other half, then repeating until I get a single value. Would this be practical? How would you logically combine pixels to average their perceived color/luminosity?

    2. Sample a statistically significant number of pixels in the area and then combine them (somehow) to get a rough measure.

    I think this problem comes up a lot these days, with people being so fond of customizing backgrounds. Seems like something that would be worth my time to bang out a category or class to handle this, and then share it around.
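
    Approach 2 is usually enough in practice: sample a sparse grid, convert each sample to perceived luminance, and weight by alpha. A sketch of the arithmetic in Python with Pillow rather than UIKit (the step size of 8 is an arbitrary assumption; the Rec. 601 weights are the standard perceived-luminance coefficients):

        from PIL import Image

        def average_luminosity(image, box, step=8):
            """Average alpha-weighted luminance of box=(left, top, right, bottom)."""
            region = image.convert("RGBA").crop(box)
            width, height = region.size
            pixels = region.load()
            total = weight = 0.0
            for y in range(0, height, step):
                for x in range(0, width, step):
                    r, g, b, a = pixels[x, y]
                    luma = 0.299 * r + 0.587 * g + 0.114 * b   # Rec. 601 weights
                    total += luma * (a / 255.0)                # transparent pixels count less
                    weight += a / 255.0
            return total / weight if weight else 0.0           # 0..255 scale

    A result well above the midpoint suggests dark label text; well below, light text.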

    Read the article

  • String Manipulation in C

    - by baris_a
    Hi guys, I am helping my nephew with his C lab homework; it is a string manipulation assignment, applying Wang's algorithm. Here is the BNF for the input:

        <sequent>     ::= <lhs> # <rhs>
        <lhs>         ::= <formulalist> | e
        <rhs>         ::= <formulalist> | e
        <formulalist> ::= <formula> | <formula> , <formulalist>
        <formula>     ::= <letter> | - <formula> | ( <formula> <infixop> <formula> )
        <infixop>     ::= & | | | >
        <letter>      ::= A | B | ... | Z

    What is the best practice for handling and parsing this kind of input in C? How can I parse this structure without using a struct? Thanks in advance.
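
    A grammar like this maps directly onto recursive descent: one function per nonterminal plus a cursor into the input, no structs required beyond that index. A hedged sketch of the shape in Python for brevity (a C version would be the same functions over a char pointer and an int index):

        class Parser:
            """Recognizer for the sequent grammar above (whitespace ignored)."""
            def __init__(self, text):
                self.text = text.replace(" ", "")
                self.pos = 0

            def peek(self):
                return self.text[self.pos] if self.pos < len(self.text) else ""

            def eat(self, ch):
                if self.peek() != ch:
                    raise SyntaxError("expected %r at %d" % (ch, self.pos))
                self.pos += 1

            def sequent(self):               # <lhs> # <rhs>, either side may be empty
                if self.peek() != "#":
                    self.formulalist()
                self.eat("#")
                if self.peek():
                    self.formulalist()

            def formulalist(self):           # <formula> { , <formula> }
                self.formula()
                while self.peek() == ",":
                    self.eat(",")
                    self.formula()

            def formula(self):
                c = self.peek()
                if c == "-":                 # - <formula>
                    self.eat("-")
                    self.formula()
                elif c == "(":               # ( <formula> <infixop> <formula> )
                    self.eat("(")
                    self.formula()
                    if self.peek() not in ("&", "|", ">"):
                        raise SyntaxError("expected infix operator at %d" % self.pos)
                    self.pos += 1
                    self.formula()
                    self.eat(")")
                elif c.isupper():            # <letter>
                    self.pos += 1
                else:
                    raise SyntaxError("unexpected %r at %d" % (c, self.pos))

    For example, Parser("(A&B),-C#D").sequent() runs through without raising, so that input is a well-formed sequent.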

    Read the article

  • How do I optimize this postfix expression tree for speed?

    - by Peter Stewart
    Thanks to the help I received in this post, I have a nice, concise recursive function to traverse a tree in postfix order:

        deque<char*> d;

        void Node::postfix()
        {
            if (left != __nullptr) { left->postfix(); }
            if (right != __nullptr) { right->postfix(); }
            d.push_front(cargo);
            return;
        }

    This is an expression tree. The branch nodes are operators randomly selected from an array, and the leaf nodes are values or the variable 'x', also randomly selected from an array:

        char *values[10] = {"1.0","2.0","3.0","4.0","5.0","6.0","7.0","8.0","9.0","x"};
        char *ops[4] = {"+","-","*","/"};

    As this will be called billions of times during a run of the genetic algorithm of which it is a part, I'd like to optimize it for speed. I have a number of questions on this topic which I will ask in separate postings. The first is: how can I get access to each 'cargo' as it is found? That is, instead of pushing 'cargo' onto a deque and then processing the deque to get the values, I'd like to start processing each one right away. I don't yet know about parallel processing in C++, but ideally this would be done concurrently on two different processors.

    In Python, I'd make the function a generator and access succeeding 'cargo's using .next(). But I'm using C++ to speed up the Python implementation. I'm thinking that this kind of tree has been around for a long time, and somebody has probably optimized it already. Any ideas? Thanks
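
    The generator idea carries over: in C++ the usual equivalent is passing a visitor/callback into the traversal, so each cargo is consumed the moment it is produced and the deque disappears entirely. The Python generator version mentioned above, as a sketch:

        def postfix(node):
            """Yield each cargo in postfix order, as soon as the traversal reaches it."""
            if node.left is not None:
                yield from postfix(node.left)    # Python 2 spelling: for c in ...: yield c
            if node.right is not None:
                yield from postfix(node.right)
            yield node.cargo

        # consumers start work immediately instead of waiting for the whole walk:
        # for cargo in postfix(root):
        #     evaluate(cargo)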

    Read the article

  • Are these two functions the same?

    - by Ranhiru
    There is a function in the AES algorithm to multiply a byte by 2 in a Galois field. This is the function given on a website:

        private static byte gfmultby02(byte b)
        {
            if (b < 0x80)
                return (byte)(int)(b << 1);
            else
                return (byte)((int)(b << 1) ^ (int)(0x1b));
        }

    This is the function I wrote:

        private static byte MulGF2(byte x)
        {
            if (x < 0x80)
                return (byte)(x << 1);
            else
                return (byte)((x << 1) ^ 0x1b);
        }

    What I need to know is: given any byte, will these perform in the same manner? Actually, I am worried about the extra conversion to int and then back to byte. So far I have tested it and it looks fine. Does the extra cast to int and then to byte make a difference?
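
    For what it's worth, in C# the shift already promotes its byte operand to int, so the explicit (int) casts in the first version are redundant and the two functions compute the same thing. With only 256 possible inputs this is also easy to check exhaustively; a sketch of that check in Python:

        def gf_mul_by_02(b):
            """Multiply by x (0x02) in the AES field GF(2^8), reducing modulo 0x11b."""
            b <<= 1
            return b & 0xFF if b < 0x100 else (b ^ 0x11B) & 0xFF

        for b in range(256):
            shifted = (b << 1) & 0xFF                           # what the (byte) cast keeps
            branchy = shifted if b < 0x80 else shifted ^ 0x1B   # both C# versions reduce to this
            assert branchy == gf_mul_by_02(b)                   # agrees for every byte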

    Read the article

  • Creating subtree from tree which is represented in xml - python

    - by Jay
    Hi, I have an XML document (in the form of a tree) and I need to create sub-trees out of it. For example:

        <a>
          <b>
            <c>Hello</c>
          </b>
          <d>
            <e>Hi</e>
          </d>
        </a>

    The subtree document would be:

        <root>
          <a>
            <b>
              <c>Hello</c>
            </b>
          </a>
          <a>
            <d>
              <e>Hi</e>
            </d>
          </a>
        </root>

    What is the best XML library in Python to do this with? Any algorithm that already does this would also be helpful. Note: the XML doc won't be that big; it will easily fit in memory.
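
    The standard library's ElementTree handles documents of this size comfortably; a sketch, assuming the structure above (one <root> child per direct child of <a>):

        import copy
        import xml.etree.ElementTree as ET

        def split_into_subtrees(xml_text):
            a = ET.fromstring(xml_text)               # parse; the root element is <a>
            root = ET.Element("root")
            for child in a:                           # <b>, then <d>
                wrapper = ET.SubElement(root, a.tag)  # a fresh <a> for each subtree
                wrapper.append(copy.deepcopy(child))
            return ET.tostring(root, encoding="unicode")

        print(split_into_subtrees("<a><b><c>Hello</c></b><d><e>Hi</e></d></a>"))
        # <root><a><b><c>Hello</c></b></a><a><d><e>Hi</e></d></a></root>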

    Read the article

  • Flushing writes in buffer of Memory Controller to DDR device

    - by Rohit
    At some point in my code, I need to push the writes in my program all the way to the DIMM or DDR device. My requirement is to ensure the write reaches the row, bank and column of the DDR device on the DIMM. I need to read back what I've written to main memory, and I do not want caching to give me the value: after writing, I want to fetch this value from main memory (the DIMMs).

    So far I've been using Intel's x86 instruction wbinvd (write back and invalidate cache). However, this means the caches and TLB are flushed, and write-back requests go to main memory. But there is a reasonable amount of time during which this data might still reside in the write buffer of the memory controller (Intel calls it the integrated memory controller, or IMC). The memory controller might take some more time depending on the algorithm it runs to handle writes.

    Is there a way to force all existing or pending writes in the write buffer of the memory controller out to the DRAM devices? What I am looking for is something more direct and more low-level than wbinvd. If you could point me to the right documents or specs that describe this, I would be grateful. Generally the IMC has several registers which can be written to or read from, but from looking at the specs for the chipset I could not find anything useful. Thanks for taking the time to read this.

    Read the article

  • Can't decrypt after encrypting with blowfish Java

    - by user2030599
    Hello, I'm new to Java and I have the following problem: I'm trying to encrypt the password of a user using the Blowfish algorithm, but when I try to decrypt it back to check the authentication, the decryption fails for some reason.

        public static String encryptBlowFish(String to_encrypt, String salt) {
            String dbpassword = null;
            try {
                SecretKeySpec skeySpec = new SecretKeySpec(salt.getBytes(), "Blowfish");
                // Instantiate the cipher.
                Cipher cipher = Cipher.getInstance("Blowfish/CBC/PKCS5Padding");
                cipher.init(Cipher.ENCRYPT_MODE, skeySpec);
                //byte[] encrypted = cipher.doFinal(URLEncoder.encode(data).getBytes());
                byte[] encrypted = cipher.doFinal(to_encrypt.getBytes());
                dbpassword = new String(encrypted);
            } catch (Exception e) {
                System.out.println("Exception while encrypting");
                e.printStackTrace();
                dbpassword = null;
            } finally {
                return dbpassword;
            }
        }

        public static String decryptBlowFish(String to_decrypt, String salt) {
            String dbpassword = null;
            try {
                SecretKeySpec skeySpec = new SecretKeySpec(salt.getBytes(), "Blowfish");
                // Instantiate the cipher.
                Cipher cipher = Cipher.getInstance("Blowfish/CBC/PKCS5Padding");
                cipher.init(Cipher.DECRYPT_MODE, skeySpec);
                //byte[] encrypted = cipher.doFinal(URLEncoder.encode(data).getBytes());
                byte[] encrypted = cipher.doFinal(to_decrypt.getBytes());
                dbpassword = new String(encrypted);
            } catch (Exception e) {
                System.out.println("Exception while decrypting");
                e.printStackTrace();
                dbpassword = null;
            } finally {
                return dbpassword;
            }
        }

    When I call the decrypt function it gives me the following error:

        java.security.InvalidKeyException: Parameters missing

    Any ideas? Thank you

    Read the article

  • What about race condition in multithreaded reading?

    - by themoob
    Hi, according to an article on IBM.com, "a race condition is a situation in which two or more threads or processes are reading or writing some shared data, and the final result depends on the timing of how the threads are scheduled. Race conditions can lead to unpredictable results and subtle program bugs." Although the article concerns Java, I have in general been taught the same definition.

    As far as I know, the simple operation of reading from RAM is composed of setting the states of specific input lines (address, read, etc.) and reading the states of output lines. This is an operation that obviously cannot be executed simultaneously by two devices and has to be serialized.

    Now let's suppose we have a situation where a couple of threads access an object in memory. In theory, this access should be serialized in order to prevent race conditions. But e.g. the readers/writers algorithm assumes that an arbitrary number of readers can use the shared memory at the same time. So, the question is: does one have to implement an exclusive lock for reads when using multithreading (in WinAPI, e.g.)? If not, why? Where is this control implemented: the OS, or the hardware?

    Best regards, Kuba

    Read the article

  • Decimal problem in Java

    - by Jerome
    I am experimenting with writing a Java class for financial engineering (a hobby of mine). The calculation I want to implement is:

        100.05 / 0.05 = 2001

    Instead I get:

        100.05 / 0.05 = 2000.9999999999998

    I understand the problem to be one of representing decimal values in binary floating point. I have looked at the BigDecimal class on the web but have not quite found what I am looking for. Attached are the class and the program that I wrote:

        // DCF class
        public class DCF {
            double rate;
            double principle;
            int period;

            public DCF(double $rate, double $principle, int $period) {
                rate = $rate;
                principle = $principle;
                period = $period;
            }

            // returns the consol value
            public double consol() {
                return principle / rate;
            }

            // toString method
            public String toString() {
                return "(" + rate + "," + principle + "," + period + ")";
            }
        }

    Now the actual program:

        // consol program
        public class DCFmain {
            public static void main(String[] args) {
                // create new DCF
                DCF abacus = new DCF(0.05, 100.05, 5);
                System.out.println(abacus);
                System.out.println("Console value= " + abacus.consol());
            }
        }

    Output:

        (0.05,100.05,5)
        Console value= 2000.9999999999998

    Thanks!
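
    This is the classic binary-versus-decimal representation issue: neither 100.05 nor 0.05 has an exact double representation, so the quotient lands just below 2001. A decimal type fixes it; in Java that is BigDecimal constructed from strings, and the same pair of behaviours can be sketched in Python:

        from decimal import Decimal

        print(100.05 / 0.05)                          # 2000.9999999999998 (binary doubles)
        print(Decimal("100.05") / Decimal("0.05"))    # 2001 (exact decimal arithmetic)

    The string constructor matters: Decimal(100.05), like new BigDecimal(100.05), would inherit the binary rounding error you are trying to avoid.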

    Read the article

  • Parallel.For Batching

    - by chibacity
    Is there built-in support in the TPL for batching operations? I was recently playing with a routine to carry out character replacement on a character array, which requires a lookup table, i.e. transliteration:

        for (int i = 0; i < chars.Length; i++)
        {
            char replaceChar;
            if (lookup.TryGetValue(chars[i], out replaceChar))
            {
                chars[i] = replaceChar;
            }
        }

    I could see that this could be trivially parallelized, so I jumped in with a first stab which I knew would perform worse, as the tasks were too fine-grained:

        Parallel.For(0, chars.Length, i =>
        {
            char replaceChar;
            if (lookup.TryGetValue(chars[i], out replaceChar))
            {
                chars[i] = replaceChar;
            }
        });

    I then reworked the algorithm to use batching so that the work could be chunked onto different threads in less fine-grained batches. This made use of threads as expected and I got some near-linear speedup. I'm sure that there must be built-in support for batching in the TPL. What is the syntax, and how do I use it?

        const int CharBatch = 100;
        int charLen = chars.Length;
        Parallel.For(0, ((charLen / CharBatch) + 1), i =>
        {
            int batchUpper = ((i + 1) * CharBatch);
            for (int j = i * CharBatch; j < batchUpper && j < charLen; j++)
            {
                char replaceChar;
                if (lookup.TryGetValue(chars[j], out replaceChar))
                {
                    chars[j] = replaceChar;
                }
            }
        });

    Read the article

  • Am I going the right way to make login system secure with this simple password salting?

    - by LoVeSmItH
    I have two fields in my login table:

        password
        salt

    And I have this little function to generate the salt:

        function random_salt($h_algo = "sha512") {
            $salt1 = uniqid(rand(), TRUE);
            $salt2 = date("YmdHis") . microtime(true);
            if (function_exists('dechex')) {
                $salt2 = dechex($salt2);
            }
            $salt3 = $_SERVER['REMOTE_ADDR'];
            $salt = $salt1 . $salt2 . $salt3;
            if (function_exists('hash')) {
                $hash = (in_array($h_algo, hash_algos())) ? $h_algo : "sha512";
                $randomsalt = hash($hash, md5($salt)); // 128 characters long if sha512 is used
            } else {
                $randomsalt = sha1(md5($salt)); // 40 characters long
            }
            return $randomsalt;
        }

    Now, to create the user password I have the following:

        $userinput = $_POST["password"]; // don't bother about escaping, I have done it in my real project
        $static_salt = "THIS-3434-95456-IS-RANDOM-27883478274-SALT"; // some static, hard-to-predict secret salt
        $salt = random_salt(); // generates a 128 character long hash
        $password = sha1($salt . $userinput . $static_salt);

    $salt is saved in the salt field of the database and $password is saved in the password field.

    My problem: in random_salt(), I have this feeling that I'm just making things complicated, while the salt may not be as secure as it should be. Can someone throw me a light on whether I'm going in the right direction?

    P.S. I do have an idea about crypt functions and the like. I just want to know: is my code okay? Thanks.
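
    The short version is that a salt only needs to be unique and unpredictable, so a handful of bytes from a CSPRNG beats any pile of hashed timestamps; the real protection then comes from a slow, iterated hash rather than a single sha1(). A sketch of that shape in Python (PHP gets the same natively from crypt() with a bcrypt salt, or password_hash() in 5.5+); the round count is an assumption to tune:

        import hashlib
        import hmac
        import os

        ROUNDS = 100_000  # tune so one hash takes tens of milliseconds

        def hash_password(password, salt=None):
            """Salted, stretched hash: returns (salt, digest) for the two columns."""
            salt = salt or os.urandom(16)      # unique + unpredictable is all a salt needs
            digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ROUNDS)
            return salt, digest

        def verify_password(password, salt, digest):
            candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ROUNDS)
            return hmac.compare_digest(candidate, digest)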

    Read the article

  • Simplification / optimization of GPS track

    - by GreyCat
    I've got a GPS track produced by gpxlogger(1) (supplied as a client for gpsd). The GPS receiver updates its coordinates every second; gpxlogger's logic is very simple, it writes down the location (lat, lon, ele) and a timestamp (time) received from GPS every n seconds (n = 3 in my case). After writing down several hours worth of track, gpxlogger saves a several-megabyte GPX file that includes several thousand points.

    Afterwards, I try to plot this track on a map and use it with OpenLayers. It works, but several thousand points make using the map a sloppy and slow experience. I understand that having several thousand points is suboptimal. There are myriads of points that can be deleted without losing almost anything: when several points make up a roughly straight line and we're moving at the same constant speed between them, we can keep just the first and the last point and throw away everything else.

    I thought of using gpsbabel for such a track simplification / optimization job, but, alas, its simplification filter works only with routes, i.e. it analyzes only the geometrical shape of the path, without timestamps (i.e. without checking that the speed was roughly constant). Is there some ready-made utility / library / algorithm available to optimize tracks? Or maybe I'm missing some clever option in gpsbabel?
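
    The geometric half of this is the classic Ramer-Douglas-Peucker reduction: keep the endpoints, keep any point that deviates from the chord by more than a tolerance, recurse. A sketch in Python; the constant-speed criterion would need one extra pass (e.g. also keeping points where the speed between neighbours changes beyond a threshold), and lat/lon are treated as planar, which is a fair assumption at track scale:

        import math

        def point_segment_distance(p, a, b):
            """Distance from point p to the segment a-b, all as (x, y) tuples."""
            (px, py), (ax, ay), (bx, by) = p, a, b
            dx, dy = bx - ax, by - ay
            if dx == 0 and dy == 0:
                return math.hypot(px - ax, py - ay)
            t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
            return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

        def simplify(points, epsilon):
            """Ramer-Douglas-Peucker on a list of (lat, lon) tuples."""
            if len(points) < 3:
                return points
            dists = [point_segment_distance(p, points[0], points[-1])
                     for p in points[1:-1]]
            worst = max(range(len(dists)), key=dists.__getitem__)
            if dists[worst] <= epsilon:
                return [points[0], points[-1]]     # the chord is close enough
            split = worst + 1
            return simplify(points[:split + 1], epsilon)[:-1] + simplify(points[split:], epsilon)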

    Read the article

  • LINQ : How to query how to sort result by most similarity/equality

    - by aNui
    I want to do a search for musical instruments, where each instrument has Name, Category and Origin information, as I asked in my earlier post. But now I want to sort/group the results by similarity/equality to the keyword. For example, if I have the list { Harp, Piano, Drum, Guitar, Guitarrón } and I query "p", the result should be { Piano, Harp }, but it currently shows Harp first because of the list's sequence. And if I add { Grand Piano } to the list and query "piano", the result should be like { Piano, Grand Piano }. Here's my code:

        static IEnumerable<MInstrument> InstrumentsSearch(
            IEnumerable<MInstrument> InstrumentsList,
            string query,
            MInstrument.Category[] SelectedCategories,
            MInstrument.Origin[] SelectedOrigins)
        {
            var result = InstrumentsList
                .Where(item => SelectedCategories.Contains(item.category))
                .Where(item => SelectedOrigins.Contains(item.origin))
                .Where(item =>
                    {
                        if ((" " + item.Name.ToLower()).Contains(" " + query.ToLower())
                            || item.Name.IndexOf(query) != -1)
                        {
                            return true;
                        }
                        return false;
                    })
                .Take(30);

            return result.ToList<MInstrument>();
        }

    Or the result may be like my old self-invented algorithm, which I called "by order of occurrence"; that is just OK to me. Is there any way to do that? Please tell me. Thanks in advance.
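
    Ranking by "order of occurrence" usually means sorting matches by where the query appears in the name (position 0 means the name starts with it), with ties broken alphabetically; in LINQ that is an OrderBy/ThenBy on the same key, placed before the Take(30). A language-neutral sketch of the key in Python:

        def rank_matches(names, query):
            q = query.lower()
            hits = [n for n in names if q in n.lower()]
            # earlier occurrence ranks higher; ties fall back to alphabetical order
            return sorted(hits, key=lambda n: (n.lower().index(q), n.lower()))

        print(rank_matches(["Harp", "Piano", "Drum", "Guitar", "Guitarrón"], "p"))
        # ['Piano', 'Harp']
        print(rank_matches(["Harp", "Piano", "Grand Piano"], "piano"))
        # ['Piano', 'Grand Piano']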

    Read the article

  • How to improve multi-threaded access to Cache (custom implementation)

    - by Andy
    I have a custom Cache implementation which allows caching of TCacheable<TKey> descendants using the LRU (Least Recently Used) cache replacement algorithm. Every time an element is accessed, it is bubbled up to the top of the LRU queue using the following synchronized function:

        // a single instance is created to handle all TCacheable<T> elements
        public class Cache
        {
            private object syncQueue = new object();

            private void topQueue(TCacheable<T> el)
            {
                lock (syncQueue)
                    if (newest != el)
                    {
                        if (el.elder != null) el.elder.newer = el.newer;
                        if (el.newer != null) el.newer.elder = el.elder;

                        if (oldest == el) oldest = el.newer;
                        if (oldest == null) oldest = el;

                        if (newest != null) newest.newer = el;
                        el.newer = null;
                        el.elder = newest;
                        newest = el;
                    }
            }
        }

    The bottleneck in this function is the lock() operator, which limits cache access to just one thread at a time. Question: is it possible to get rid of lock(syncQueue) in this function while still preserving the queue integrity?

    Read the article

  • Exercise for exam about comparing string on file

    - by Telmo Vaz
    I'm trying to do this exercise for my exam tomorrow. I need to compare a string from my own input and see if that string appears in the file. This needs to be done directly on the file, so I cannot extract the string into my program and compare them "indirectly". I found this way but I'm not getting it right, and I don't know why; the algorithm sounds good to me. Any help, please? I really need to focus on this one. Thanks in advance, guys.

        #include <stdio.h>

        void comp();

        int main(void)
        {
            comp();
            return 0;
        }

        void comp()
        {
            FILE *file = fopen("e1.txt", "r+");
            if (!file)
            {
                printf("Not possible to open the file");
                return;
            }

            char src[50], ch;
            short i, len;

            fprintf(stdout, "What are you looking for? \nwrite: ");
            fgets(src, 200, stdin);
            len = strlen(src);

            while ((ch = fgetc(file)) != EOF)
            {
                i = 0;
                while (ch == src[i])
                {
                    if (i <= len)
                    {
                        printf("%c - %c", ch, src[i]);
                        fseek(file, 0, SEEK_CUR + 1);
                        i++;
                    }
                    else
                        break;
                }
            }
        }

    Read the article

  • Thread management advice - Is TPL a good idea?

    - by Ian
    I'm hoping to get some advice on the use of thread management, and hopefully the Task Parallel Library, because I'm not sure I've been going down the correct route. Probably best is that I give an outline of what I'm trying to do.

    Given a Problem, I need to generate a Solution using a heuristic-based algorithm. I start off by calculating a base solution; this operation I don't think can be parallelised, so we don't need to worry about it. Once the initial solution has been generated, I want to trigger n threads, which attempt to find a better solution. These threads need to do a couple of things:

    1. They need to be initialized with a different 'optimization metric'. In other words, they are attempting to optimize different things, with a precedence level set within code. This means they all run slightly different calculation engines. I'm not sure if I can do this with the TPL.

    2. If one of the threads finds a better solution than the currently best known solution (which needs to be shared across all threads), then it needs to update the best solution and force a number of other threads to restart (again, this depends on the precedence levels of the optimization metrics).

    3. I may also wish to combine certain calculations across threads (e.g. keep a union of probabilities for a certain approach to the problem). This is probably more optional, though.

    The whole system needs to be thread-safe, obviously, and I want it to run as fast as possible. I tried an implementation that involved managing my own threads and shutting them down etc., but it started getting quite complicated, and I'm now wondering if the TPL might be better. I'm wondering if anyone can offer any general guidance? Thanks...

    Read the article

  • Thread-safety of read-only memory access

    - by Edmund
    I've implemented the Barnes-Hut gravity algorithm in C as follows:

    1. Build a tree of clustered stars.
    2. For each star, traverse the tree and apply the gravitational forces from each applicable node.
    3. Update the star velocities and positions.

    Stage 2 is the most expensive stage, and so it is implemented in parallel by dividing the set of stars. E.g. with 1000 stars and 2 threads, I have one thread processing the first 500 stars and the second thread processing the second 500. In practice this works: it speeds up the computation by about 30% with two threads on a two-core machine, compared to the non-threaded version. Additionally, it yields the same numerical results as the original non-threaded version.

    My concern is that the two threads are accessing the same resource (namely, the tree) simultaneously. I have not added any synchronisation to the thread workers, so it's likely they will attempt to read from the same location at some point. Although access to the tree is strictly read-only, I am not 100% sure it's safe. It has worked when I've tested it, but I know this is no guarantee of correctness!

    Questions:

    1. Do I need to make a private copy of the tree for each thread?
    2. Even if it is safe, are there performance problems with accessing the same memory from multiple threads?

    Read the article

  • How to keep Java Frame from waiting?

    - by pypmannetjies
    I am writing a genetic algorithm that approximates an image with a polygon. While going through the different generations, I'd like to output the progress to a JFrame. However, it seems like the JFrame waits until the GA's while loop finishes to display anything. I don't believe it's a problem like repainting, since it eventually does display everything once the while loop exits. I want the GUI to update dynamically even while the while loop is running. Here is my code:

        while (some conditions) {
            // do some other stuff
            gui.displayPolygon(best);
            gui.displayFitness(fitness);
            gui.setVisible(true);
        }

        public void displayPolygon(Polygon poly) {
            BufferedImage bpoly = ImageProcessor.createImageFromPoly(poly);
            ImageProcessor.displayImage(bpoly, polyPanel);
            this.setVisible(true);
        }

        public static void displayImage(BufferedImage bimg, JPanel panel) {
            panel.removeAll();
            panel.setBounds(0, 0, bimg.getWidth(), bimg.getHeight());
            JImagePanel innerPanel = new JImagePanel(bimg, 25, 25);
            panel.add(innerPanel);
            innerPanel.setLocation(25, 25);
            innerPanel.setVisible(true);
            panel.setVisible(true);
        }

    Read the article

  • Manhattan Heuristic function for A-star (A*)

    - by Shawn Mclean
    I found this algorithm here. I have a problem: I can't seem to understand how to set up and pass my heuristic function.

        static public Path<TNode> AStar<TNode>(TNode start, TNode destination,
            Func<TNode, TNode, double> distance,
            Func<TNode, double> estimate) where TNode : IHasNeighbours<TNode>
        {
            var closed = new HashSet<TNode>();
            var queue = new PriorityQueue<double, Path<TNode>>();
            queue.Enqueue(0, new Path<TNode>(start));

            while (!queue.IsEmpty)
            {
                var path = queue.Dequeue();
                if (closed.Contains(path.LastStep))
                    continue;
                if (path.LastStep.Equals(destination))
                    return path;
                closed.Add(path.LastStep);

                foreach (TNode n in path.LastStep.Neighbours)
                {
                    double d = distance(path.LastStep, n);
                    var newPath = path.AddStep(n, d);
                    queue.Enqueue(newPath.TotalCost + estimate(n), newPath);
                }
            }
            return null;
        }

    As you can see, it accepts two functions: a distance and an estimate function. Using the Manhattan heuristic distance function, I need to take two parameters. Do I need to modify his source and change it to accept two parameters of TNode, so I can pass a Manhattan estimate to it? This would mean the fourth parameter becomes:

        Func<TNode, TNode, double> estimate) where TNode : IHasNeighbours<TNode>

    and the estimate call changes to:

        queue.Enqueue(newPath.TotalCost + estimate(n, path.LastStep), newPath);

    My Manhattan function is:

        private float manhattanHeuristic(Vector3 newNode, Vector3 end)
        {
            return (Math.Abs(newNode.X - end.X) + Math.Abs(newNode.Y - end.Y));
        }
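
    There is a way to keep the algorithm's one-argument signature untouched: bind the goal into the heuristic before passing it in (in C#, a lambda such as n => manhattanHeuristic(n, destination), capturing the destination). The same closure idea, sketched in Python:

        from functools import partial

        def manhattan(goal, node):
            """Two-argument heuristic: grid distance between node and goal."""
            return abs(node[0] - goal[0]) + abs(node[1] - goal[1])

        goal = (10, 4)
        estimate = partial(manhattan, goal)   # now a one-argument h(n), as A* expects
        print(estimate((7, 2)))               # 5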

    Read the article

  • Constraint to array dimension in C language

    - by Summer_More_More_Tea
        int KMP(const char *original, int o_len, const char *substring, int s_len)
        {
            if (o_len < s_len)
                return -1;

            int k = 0;
            int cur = 1;
            int fail[s_len];
            fail[k] = -1;

            while (cur < s_len)
            {
                k = cur - 1;
                do {
                    if (substring[cur] == substring[k])
                    {
                        fail[cur] = k;
                        break;
                    }
                    else
                    {
                        k = fail[k] + 1;
                    }
                } while (k);

                if (!k && (substring[cur] != substring[0]))
                {
                    fail[cur] = -1;
                }
                else if (!k)
                {
                    fail[cur] = 0;
                }
                cur++;
            }

            k = 0;
            cur = 0;
            while ((k < s_len) && (cur < o_len))
            {
                if (original[cur] == substring[k])
                {
                    cur++;
                    k++;
                }
                else
                {
                    if (k == 0)
                        cur++;
                    else
                        k = fail[k - 1] + 1;
                }
            }

            if (k == s_len)
                return cur - k;
            else
                return -1;
        }

    This is a KMP algorithm I once coded. When I reviewed it this morning, I found it strange that an integer array is declared as int fail[s_len]. Does the specification require array dimensions to be compile-time constants? How can this code pass compilation? By the way, my gcc version is 4.4.1. Thanks in advance!

    Read the article

  • Move camera to fit 3D scene

    - by Burre
    Hi there. I'm looking for an algorithm to fit a bounding box inside a viewport (in my case, a DirectX scene). I know about algorithms for centering a bounding sphere in an orthographic camera, but I need the same for a bounding box and a perspective camera. I have most of the data:

    - I have the up-vector for the camera.
    - I have the center point of the bounding box.
    - I have the look-at vector (direction and distance) from the camera point to the box center.
    - I have projected the points onto a plane perpendicular to the camera and retrieved the coefficients describing how much the max/min X and Y coords are within or outside the viewing plane.

    Problems I have:

    - The center of the bounding box isn't necessarily in the center of the viewport (that is, its bounding rectangle after projection).
    - Since the field of view "skews" the projection (see http://en.wikipedia.org/wiki/File:Perspective-foreshortening.svg), I cannot simply use the coefficients as a scale factor to move the camera, because it will overshoot/undershoot the desired camera position.

    How do I find the camera position so that it fills the viewport as pixel-perfectly as possible (the exception being if the aspect ratio is far from 1.0; then it only needs to fill one of the screen axes)?

    I've tried some other things:

    - Using a bounding sphere and a tangent to find a scale factor to move the camera. This doesn't work well because it doesn't take the perspective projection into account, and secondly spheres are bad bounding volumes for my use, because I have a lot of flat and long geometries.
    - Iterating calls to the function to get a smaller and smaller error in the camera position. This has worked somewhat, but I can sometimes run into weird edge cases where the camera position overshoots too much and the error factor increases. Also, when doing this I didn't recenter the model based on the position of the bounding rectangle; I couldn't find a solid, robust way to do that reliably.

    Help please!
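
    One way to avoid the iterate-and-correct loop entirely: transform the eight box corners into camera space, compute per corner how far back along the view axis the eye must sit for that corner to clear the horizontal and vertical frustum planes, and take the maximum. A rough sketch of that arithmetic in Python (it assumes the corners are already expressed in camera space with +z pointing away from the eye and the look-at point at the origin; recentering against the projected bounding rectangle is left out):

        import math

        def fit_distance(corners, fov_y, aspect):
            """Smallest eye distance from the look-at point that keeps every
            corner inside a symmetric perspective frustum."""
            tan_y = math.tan(fov_y / 2.0)
            tan_x = tan_y * aspect          # horizontal half-FOV from the aspect ratio
            d = 0.0
            for (x, y, z) in corners:
                # corner is visible when |x| <= tan_x * (d + z) and |y| <= tan_y * (d + z)
                d = max(d, abs(x) / tan_x - z, abs(y) / tan_y - z)
            return d

    The camera position is then box_center - view_direction * fit_distance(...), which makes the tightest corner touch a frustum plane, i.e. fills at least one screen axis exactly, as described above.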

    Read the article
