Search Results


  • Creating a Maze using Java

    - by user356184
    I'm using Java to create a maze of a specified number of "rows" and "columns" laid over each other to look like a grid. I plan to use a depth-first recursive method to "open the doors" between the rooms (the boxes created by the rows and columns). I need help writing an openDoor method that will break the link between two rooms.
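
    Since the post doesn't show how the rooms are stored, here is one minimal sketch, assuming each room is a bitmask of the walls that are still closed; the NORTH/SOUTH/EAST/WEST constants and the openDoor signature are my assumptions, not something taken from the post:

    public class MazeSketch {
        static final int NORTH = 1, SOUTH = 2, EAST = 4, WEST = 8;
        private final int[][] rooms;   // each room starts with all four walls closed

        public MazeSketch(int rows, int cols) {
            rooms = new int[rows][cols];
            for (int r = 0; r < rows; r++)
                for (int c = 0; c < cols; c++)
                    rooms[r][c] = NORTH | SOUTH | EAST | WEST;
        }

        // "Opens the door": clears the wall bit in this room and the matching bit next door.
        public void openDoor(int row, int col, int direction) {
            rooms[row][col] &= ~direction;
            switch (direction) {
                case NORTH: rooms[row - 1][col] &= ~SOUTH; break;
                case SOUTH: rooms[row + 1][col] &= ~NORTH; break;
                case EAST:  rooms[row][col + 1] &= ~WEST;  break;
                case WEST:  rooms[row][col - 1] &= ~EAST;  break;
            }
        }
    }

    The depth-first generator would then call openDoor(row, col, direction) each time it steps from a room to an unvisited neighbour.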

    Read the article

  • DFS Backtracking with java

    - by Cláudio Ribeiro
    I'm having problems with DFS backtracking in an adjacency matrix. Here's my code (I added the test to main in case someone wants to try it):

    import java.util.*;

    public class Graph {
        private int numVertex;
        private int numEdges;
        private boolean[][] adj;

        public Graph(int numVertex, int numEdges) {
            this.numVertex = numVertex;
            this.numEdges = numEdges;
            this.adj = new boolean[numVertex][numVertex];
        }

        public void addEdge(int start, int end) {
            adj[start - 1][end - 1] = true;
            adj[end - 1][start - 1] = true;
        }

        List<Integer> visited = new ArrayList<Integer>();

        public Integer DFS(Graph G, int startVertex) {
            int i = 0;
            if (pilha.isEmpty())
                pilha.push(startVertex);
            for (i = 1; i < G.numVertex; i++) {
                pilha.push(i);
                if (G.adj[i - 1][startVertex - 1] != false) {
                    G.adj[i - 1][startVertex - 1] = false;
                    G.adj[startVertex - 1][i - 1] = false;
                    DFS(G, i);
                    break;
                } else {
                    visited.add(pilha.pop());
                }
                System.out.println("Stack: " + pilha);
            }
            return -1;
        }

        Stack<Integer> pilha = new Stack();

        public static void main(String[] args) {
            Graph g = new Graph(6, 9);
            g.addEdge(1, 2);
            g.addEdge(1, 5);
            g.addEdge(2, 4);
            g.addEdge(2, 5);
            g.addEdge(2, 6);
            g.addEdge(3, 4);
            g.addEdge(3, 5);
            g.addEdge(4, 5);
            g.addEdge(6, 4);
            g.DFS(g, 1);
        }
    }

    I'm trying to solve the Euler path problem. The program solves basic graphs, but when it needs to backtrack, it just does not do it. I think the problem might be in the stack manipulations or in the recursive DFS call. I've tried a lot of things, but still can't figure out why it does not backtrack. Can somebody help me?
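
    For comparison, here is a minimal sketch (not the posted code) of a backtracking search over the same boolean adjacency-matrix representation and test graph: the edge is restored and the vertex removed from the path whenever the recursive call fails, which is the step the posted code is missing. Names such as tryExtend are my own.

    import java.util.*;

    public class EulerPathSketch {
        static boolean[][] adj;
        static int numVertex = 6;
        static int numEdges = 9;

        // Tries to extend the path until every edge is used; undoes each choice on failure.
        static boolean tryExtend(int current, int edgesUsed, List<Integer> path) {
            if (edgesUsed == numEdges) return true;              // all edges used: done
            for (int next = 0; next < numVertex; next++) {
                if (adj[current][next]) {
                    adj[current][next] = false;                  // take the edge
                    adj[next][current] = false;
                    path.add(next);
                    if (tryExtend(next, edgesUsed + 1, path)) return true;
                    adj[current][next] = true;                   // backtrack: restore the edge
                    adj[next][current] = true;
                    path.remove(path.size() - 1);
                }
            }
            return false;
        }

        public static void main(String[] args) {
            adj = new boolean[numVertex][numVertex];
            int[][] edges = {{1,2},{1,5},{2,4},{2,5},{2,6},{3,4},{3,5},{4,5},{6,4}};
            for (int[] e : edges) {
                adj[e[0] - 1][e[1] - 1] = true;
                adj[e[1] - 1][e[0] - 1] = true;
            }
            List<Integer> path = new ArrayList<Integer>();
            path.add(0);                                         // start at vertex 1 (0-based index 0)
            if (tryExtend(0, 0, path)) System.out.println("Euler path (0-based): " + path);
            else System.out.println("No Euler path starting at vertex 1");
        }
    }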

    Read the article

  • How would I create a VIM or Vi command to delete all text after a certain character for every line i

    - by Jason Down
    Scenario: I have a text file that has pipe (as in the "|" character) delimited data. Each field in the pipe-delimited data can be of variable length, so counting characters won't work (or using some sort of substring function... if that even exists in Vim). Is it possible, using Vim / Vi, to delete all data from the second pipe to the end of the line for the entire file? There are approx 150,000 lines, so doing this manually would only be appealing to a masochist... e.g. change the following lines from:

    1111|random sized text 12345|more random data la la la|1111|abcde
    2222|random sized text abcdefghijk|la la la la|2222|defgh
    3333|random sized text|more random data|33333|ijklmnop

    to:

    1111|random sized text 12345
    2222|random sized text abcdefghijk
    3333|random sized text

    I'm sure this can be done somehow... I hope. TIA. UPDATE: I should have mentioned that I'm running this on Windows XP, so I don't have access to some of the mentioned *nix commands (cut is not recognized on Windows).

    Read the article

  • Indexing Bilingual Sites

    - by Affar
    Hi all, I have a site that supports two languages. The user changes the language by clicking a language link that changes his session language from A to B and returns him to the same page. The problem I am facing is that Google doesn't index the other language, since it sees that the language link is the same as the page link, just with different parameters. So is there a way to inform Google of the language, or should I add the language to the URL, i.e. www.example.com/A/home?

    Read the article

  • Does array contain a specific value

    - by padatronic
    Hi chaps, I have an array of NSDictionaries and one key is ID (which is unique). I simply want to see if the array of dictionaries contains a specific ID. Really simple, but it's Friday and after staying up watching the election I think my brain is melting. Any help would be great. Thanks

    Read the article

  • SEO: Is it recommended to upload and put live a beta / non-finished version of a web site?

    - by Jonathan
    I'm working on this big website and I want to put it online before it's fully finished... I'm working locally and the database is getting really big, so I wanted to upload the website and continue to work on it on the server, while allowing people to enter, so I can test. The question is whether this is good for SEO; I mean, there are a lot of SEO-related things that are incomplete. For example: there are no friendly URLs, no sitemap, no .htaccess file, lots of 'in-construction' sections... Will Google penalize me forever? How does it work? Does Google index and get the structure of the site just once, or is it constantly updating and checking for changes? What should I do? What do you recommend?

    Read the article

  • Very fast document similarity

    - by peyton
    Hello, I am trying to determine document similarity between a single document and each of a large number of documents (n ~= 1 million) as quickly as possible. More specifically, the documents I'm comparing are e-mails; they are grouped (i.e., there are folders or tags) and I'd like to determine which group is most appropriate for a new e-mail. Fast performance is critical. My a priori assumption is that the cosine similarity between term vectors is appropriate for this application; please comment on whether this is a good measure to use or not! I have already taken into account the following possibilities for speeding up performance: Pre-normalize all the term vectors Calculate a term vector for each group (n ~= 10,000) rather than each e-mail (n ~= 1,000,000); this would probably be acceptable for my application, but if you can think of a reason not to do it, let me know! I have a few questions: If a new e-mail has a new term never before seen in any of the previous e-mails, does that mean I need to re-compute all of my term vectors? This seems expensive. Is there some clever way to only consider vectors which are likely to be close to the query document? Is there some way to be more frugal about the amount of memory I'm using for all these vectors? Thanks!
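
    For reference, a minimal sketch of cosine similarity over sparse term vectors (term -> weight maps); the class and method names are mine, not from the post:

    import java.util.HashMap;
    import java.util.Map;

    public class CosineSketch {
        // Cosine similarity of two sparse term vectors. If both vectors are
        // pre-normalized to unit length, the dot product alone is enough.
        static double cosine(Map<String, Double> a, Map<String, Double> b) {
            Map<String, Double> small = a.size() <= b.size() ? a : b;   // iterate the smaller map
            Map<String, Double> large = (small == a) ? b : a;
            double dot = 0.0;
            for (Map.Entry<String, Double> e : small.entrySet()) {
                Double w = large.get(e.getKey());
                if (w != null) dot += e.getValue() * w;
            }
            return dot / (norm(a) * norm(b));
        }

        static double norm(Map<String, Double> v) {
            double s = 0.0;
            for (double w : v.values()) s += w * w;
            return Math.sqrt(s);
        }

        public static void main(String[] args) {
            Map<String, Double> mail = new HashMap<String, Double>();
            mail.put("invoice", 2.0);
            mail.put("payment", 1.0);
            Map<String, Double> groupCentroid = new HashMap<String, Double>();
            groupCentroid.put("invoice", 1.5);
            groupCentroid.put("meeting", 0.5);
            System.out.println(cosine(mail, groupCentroid));
        }
    }

    One observation this makes concrete: a brand-new term contributes nothing to the dot product with any existing vector and does not change existing norms, so previously computed group vectors do not need to be recomputed when a new e-mail introduces unseen vocabulary.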

    Read the article

  • PHP mysql - ...AND column='anything'... ?

    - by Nike
    Is there any way to check if a column is "anything"? The reason is that I have a search function that gets an ID from the URL and then passes it through the SQL query and shows the result. But if that URL parameter isn't filled in, it just searches for: ...AND column=''... and that doesn't return any results at all. I've tried using a "%", but that doesn't do anything. Any ideas?

    Read the article

  • replacing a path with sed

    - by compie
    How can I use sed to replace this line:

    char * path_list_[1] = { "/some/random/path" };

    with this line:

    char * path_list_[2] = { "lib/foo", "lib/bar" };

    in a file named source.c. Notes: the path is really random; your solution should only change this line in source.c; I'm only interested in a sed one-liner. You can use this Python regex as a starting point:

    text = re.sub('static const char \* path_list_\[1\] = \{ "[^"]*" \};', 'static const char * path_list_[2] = { "lib/sun", "lib/matlab" };', text)

    Read the article

  • Vim: replacing start and end of a visual char, line or block

    - by gattu marrudu
    I am trying to find a shortcut to place a custom comment sequence on my code, e.g.:

    /* start of comment
    blah
    end of comment
    /**/

    (it is easier to void the comment by just adding a / to the beginning). I would like to do this in Vim by selecting a visual line, block or char and adding '/' characters at the beginning of the block and '/*/' at the end, plus newlines. After selecting some lines (Shift-V) I tried this:

    '<,'>s/\(.*\)/\/*\r\1\r\/**\//

    But it adds the comment chars at EACH newline. How can I only apply the substitution at the beginning and end of the selected range? Thanks gm

    Read the article

  • Minimum number of training examples for Find-S/Candidate Elimination algorithms?

    - by Rich
    Consider the instance space consisting of integer points in the x, y plane, where 0 <= x, y <= 10, and the set of hypotheses consisting of rectangles (i.e. being of the form (a <= x <= b, c <= y <= d), where 0 <= a, b, c, d <= 10). What is the smallest number of training examples one needs to provide so that the Find-S algorithm perfectly learns a particular target concept (e.g. (2 <= x <= 4, 6 <= y <= 9))? When can we say that the target concept is exactly learned in the case of the Find-S algorithm, and what is the optimal query strategy? I'd also like to know the answer w.r.t. Candidate Elimination. Thanks in advance.
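
    As a concrete reference for this hypothesis space, here is a minimal sketch of the Find-S update rule for axis-aligned rectangles: the hypothesis starts maximally specific (empty) and is minimally generalized to cover each positive example, while negative examples are ignored. The class and method names are mine.

    public class FindSSketch {
        int a = Integer.MAX_VALUE, b = Integer.MIN_VALUE;   // empty rectangle: covers nothing
        int c = Integer.MAX_VALUE, d = Integer.MIN_VALUE;

        // Minimal generalization: stretch the rectangle just enough to cover (x, y).
        void observePositive(int x, int y) {
            a = Math.min(a, x);
            b = Math.max(b, x);
            c = Math.min(c, y);
            d = Math.max(d, y);
        }

        boolean covers(int x, int y) {
            return a <= x && x <= b && c <= y && y <= d;
        }

        public static void main(String[] args) {
            FindSSketch h = new FindSSketch();
            h.observePositive(2, 6);                         // two positives at opposite corners
            h.observePositive(4, 9);                         // of the target 2 <= x <= 4, 6 <= y <= 9
            System.out.println(h.covers(3, 7));              // true
            System.out.println(h.covers(5, 7));              // false
        }
    }

    Playing with which positive examples are fed in makes it easier to see when the hypothesis first coincides exactly with the target concept.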

    Read the article

  • Search and remove an element in a multi-dimensional array in PHP depending on a criterion

    - by Nadeem
    I have a simple question about a multi-dimensional array: I want to remove any redundant element. In my case, [serviceMethod] => PL appears two times; I want to search for 'PL' with respect to [APIPriceTax], and if an element has a lower price I want to keep it and remove the other one from the array.

    Array
    (
        [0] => Array
            (
                [carrierIDs] => 150
                [serviceMethod] => CP
                [APIPriceTax] => 30.63
                [APIPriceWithOutTax] => 28.32
                [APIServiceName] => Xpresspost USA
                [APIExpectedTransitDay] => 2
            )
        [1] => Array
            (
                [carrierIDs] => 155
                [serviceMethod] => PL
                [APIPriceTax] => 84.13
                [APIPriceWithOutTax] => 73.8
                [APIServiceName] => PurolatorExpressU.S.
                [APIExpectedTransitDay] => 1
            )
        [2] => Array
            (
                [carrierIDs] => 164
                [serviceMethod] => PL
                [APIPriceTax] => 25.48
                [APIPriceWithOutTax] => 22.35
                [APIServiceName] => PurolatorGroundU.S.
                [APIExpectedTransitDay] => 3
            )
    )

    This is my pseudo code, where $carrierAddedToList is the actual array:

    $newCarrierAry = function($carrierAddedToList) {
        $newArray = array();
        foreach($carrierAddedToList as $cV => $cK) {
            if( !in_array($cK['serviceMethod'], $newArray) ) {
                array_push($newArray, $cK['serviceMethod']);
            }
        }
        return $newArray;
    };
    print_r($newCarrierAry($carrierAddedToList));

    Read the article

  • finding long repeated substrings in a massive string

    - by Will
    I naively imagined that I could build a suffix trie where I keep a visit count for each node, and then the deepest nodes with counts greater than one are the result set I'm looking for. But I have a really, really long string (hundreds of megabytes) and only about 1 GB of RAM, which is why building a suffix trie with counting data is too inefficient space-wise to work for me. To quote Wikipedia's Suffix tree article: "storing a string's suffix tree typically requires significantly more space than storing the string itself. The large amount of information in each edge and node makes the suffix tree very expensive, consuming about ten to twenty times the memory size of the source text in good implementations. The suffix array reduces this requirement to a factor of four, and researchers have continued to find smaller indexing structures." And that was Wikipedia's comment on the tree, not the trie. How can I find long repeated sequences in such a large amount of data, and in a reasonable amount of time (e.g. less than an hour on a modern desktop machine)? (Some Wikipedia links to avoid people posting them as the 'answer': Algorithms on strings and especially Longest repeated substring problem ;-) )
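
    For what it's worth, the suffix-array idea from the quoted passage looks roughly like this sketch on small inputs (the suffix array is built naively here, so it will not scale to hundreds of megabytes; that size needs a linear-time or external construction):

    import java.util.Arrays;

    public class LrsSketch {
        // Longest repeated substring: sort all suffix start positions, then the answer
        // is the longest common prefix of two suffixes that are adjacent in sorted order.
        static String longestRepeated(final String s) {
            int n = s.length();
            Integer[] sa = new Integer[n];
            for (int i = 0; i < n; i++) sa[i] = i;
            Arrays.sort(sa, (x, y) -> s.substring(x).compareTo(s.substring(y)));
            int bestLen = 0, bestPos = 0;
            for (int i = 1; i < n; i++) {
                int len = lcp(s, sa[i - 1], sa[i]);
                if (len > bestLen) { bestLen = len; bestPos = sa[i]; }
            }
            return s.substring(bestPos, bestPos + bestLen);
        }

        // Length of the common prefix of the suffixes starting at i and j.
        static int lcp(String s, int i, int j) {
            int len = 0;
            while (i + len < s.length() && j + len < s.length()
                    && s.charAt(i + len) == s.charAt(j + len)) len++;
            return len;
        }

        public static void main(String[] args) {
            System.out.println(longestRepeated("banana"));   // prints "ana"
        }
    }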

    Read the article

  • Return an Object in Java

    - by digby12
    I've been struggling to work out how to return an object. I have the following list of objects:

    ArrayList<Object> favourites;

    I want to find an object in the list based on its "description" property. This is my attempt so far:

    public Item finditem(String description) {
        for (Object x : favourites) {
            if (description.equals(x.getDescription())) {
                return Object x;
        else {
            return null;

    Can someone please show me how I would write this code? Thanks.
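
    A hedged sketch of one way this lookup could look, assuming the list actually holds Item objects (so it can be declared as a List<Item>) and that Item exposes getDescription(); the Item class below is a stand-in I made up for the example:

    import java.util.ArrayList;
    import java.util.List;

    class Item {
        private final String description;
        Item(String description) { this.description = description; }
        String getDescription() { return description; }
    }

    public class Favourites {
        private final List<Item> favourites = new ArrayList<Item>();

        // Returns the first favourite whose description matches, or null if none does.
        public Item findItem(String description) {
            for (Item x : favourites) {
                if (description.equals(x.getDescription())) {
                    return x;
                }
            }
            return null;   // only reached after the whole list has been checked
        }
    }

    The two usual fixes to an attempt like the one above are to return the element itself (return x, not "return Object x") and to return null only after the loop has finished, rather than inside the loop's else branch.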

    Read the article

  • Google GSA Stems for scandinavian languages

    - by HAXEN
    I have installed Scandinavia-2.1-1 language bundle to our GSA. After that I expected to find those languages available in Query Expansion, but nope nothing new there. Am I missing something? How are you other Scandinavians handling stems for your language?

    Read the article

  • search for the maximum

    - by peril brain
    I need code that will automatically:

    1. search for a specific word in Excel,
    2. note its row or column number (depending on the data arrangement),
    3. search for numeric values in the respective row or column with that numeric value (suppose a[7][0] or a[0][7]),
    4. compare all other values of the respective row or column (i.e. a[i][0] or a[0][i]),
    5. set that value to the highest value, only if IT HAS GOT NO FORMULA FOR DERIVATION.

    I know most of the coding, but in a few places I got stuck... This is the part of my program I have written so far:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.IO;
    using System.Threading;
    using Microsoft.Office.Interop;
    using Excel = Microsoft.Office.Interop.Excel;

    namespace a
    {
        class b
        {
            static Excel.Application oExcelApp;

            static void main()
            {
                try
                {
                    oExcelApp = (Excel.Application)System.Runtime.InteropServices.Marshal.GetActiveObject("Excel.Application");
                    if (oExcelApp.ActiveWorkbook != null)
                    {
                        Excel.Workbook xlwkbook = (Excel.Workbook)oExcelApp.ActiveWorkbook;
                        Excel.Worksheet ws = (Excel.Worksheet)xlwkbook.ActiveSheet;
                        Excel.Range rn;
                        rn = ws.Cells.Find("maximum", Type.Missing, Excel.XlFindLookIn.xlValues, Excel.XlLookAt.xlPart,
                             Excel.XlSearchOrder.xlByRows, Excel.XlSearchDirection.xlNext, false, Type.Missing, Type.Missing);
                    }
                }

    Beyond this I only know that I have to use the Cell.Value2 and Cell.HasFormula members, and I have no further ideas. Can anyone help me with this?

    Read the article

  • How to find kth minimal element in the union of two sorted arrays?

    - by Michael
    This is a homework question. They say it takes O(log N + log M), where N and M are the array lengths. Let's name the arrays a and b. Obviously we can ignore all a[i] and b[i] where i > k. First let's compare a[k/2] and b[k/2]. Let b[k/2] < a[k/2]. Therefore we can also discard all b[i] where i < k/2. Now we have all a[i], where i < k, and all b[i], where i > k/2, left to find the answer. What is the next step?
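
    A minimal sketch of that recurrence (0-based offsets into the arrays, k counted from 1); the clamping handles the case where fewer than k/2 elements remain in one array, and the method names are mine:

    public class KthSmallestSketch {
        // k-th smallest (1-based) element of the union of two sorted arrays,
        // discarding about k/2 candidates per step, i.e. O(log k) comparisons.
        static int kth(int[] a, int[] b, int k) {
            int i = 0, j = 0;                                 // current offsets into a and b
            while (true) {
                if (i == a.length) return b[j + k - 1];       // a exhausted
                if (j == b.length) return a[i + k - 1];       // b exhausted
                if (k == 1) return Math.min(a[i], b[j]);
                int half = k / 2;
                int ai = Math.min(i + half, a.length) - 1;    // probe indices, clamped
                int bj = Math.min(j + half, b.length) - 1;
                if (a[ai] <= b[bj]) {                         // a[i..ai] cannot contain the answer
                    k -= ai - i + 1;
                    i = ai + 1;
                } else {                                      // b[j..bj] cannot contain the answer
                    k -= bj - j + 1;
                    j = bj + 1;
                }
            }
        }

        public static void main(String[] args) {
            int[] a = {1, 3, 5, 7, 9};
            int[] b = {2, 4, 6, 8};
            System.out.println(kth(a, b, 5));                 // prints 5
        }
    }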

    Read the article

  • python: find and replace numbers < 1 in text file

    - by hjp
    I'm pretty new to Python programming and would appreciate some help to a problem I have... Basically I have multiple text files which contain velocity values as such: 0.259515E+03 0.235095E+03 0.208262E+03 0.230223E+03 0.267333E+03 0.217889E+03 0.156233E+03 0.144876E+03 0.136187E+03 0.137865E+00 etc for many lines... What I need to do is convert all the values in the text file that are less than 1 (e.g. 0.137865E+00 above) to an arbitrary value of 0.100000E+01. While it seems pretty simple to replace specific values with the 'replace()' method and a while loop, how do you do this if you want to replace a range? thanks

    Read the article

  • Graph search problem with route restrictions

    - by Darcara
    I want to calculate the most profitable route, and I think this is a type of travelling salesman problem. I have a set of nodes that I can visit, a function to calculate the cost of travelling between nodes, and points for reaching the nodes. The goal is to reach a fixed, known score while minimizing the cost. These costs and rewards are not fixed and depend on the nodes visited before. The starting node is fixed. There are some restrictions on how nodes can be visited. Some simplified examples include: node B can only be visited after A; after node C has been visited, D or E can be visited (visiting at least one is required, visiting both is permissible); Z can only be visited after at least 5 other nodes have been visited; once 50 nodes have been visited, the nodes A-M will no longer reward points; certain nodes can (and probably must) be visited multiple times. Currently I can think of only two ways to solve this: (a) genetic algorithms, with the fitness function calculating the cost/benefit of the generated route; (b) Dijkstra search through the graph, since the starting node is fixed, although the large number of nodes will probably make that infeasible memory-wise. Are there any other ways to determine the best route through the graph? It doesn't need to be perfect; an approximated path is perfectly fine, as long as its error is acceptable. Would TSP solvers be an option here?

    Read the article

  • Can php query the results from a previous query?

    - by eaolson
    In some languages (ColdFusion comes to mind), you can run a query on the result set from a previous query. Is it possible to do something like that in php (with MySQL as the database)? I sort of want to do: $rs1 = do_query( "SELECT * FROM animals WHERE type = 'fish'" ); $rs2 = do_query( "SELECT * FROM rs1 WHERE name = 'trout'" );

    Read the article

  • SQL Server 2005 FREETEXT() Performance Issue

    - by Zenon
    I have a query with about 6-7 joined tables and a FREETEXT() predicate on 6 columns of the base table in the WHERE clause. This query worked fine (in under 2 seconds) for the last year and remained practically unchanged (I tried old versions and the problem persists). Then today, all of a sudden, the same query started taking around 1-1.5 minutes. After checking the execution plan in SQL Server 2005, rebuilding the FULLTEXT index of that table, reorganising the FULLTEXT index, creating the index from scratch, restarting the SQL Server service and restarting the whole server, I don't know what else to try. I temporarily switched the query to use LIKE instead until I figure this out (which takes about 6 seconds now). When I compare the FREETEXT query with the LIKE query in the query performance analyser, the former has 350 times as many reads (4921261 vs. 13943) and 20 times the CPU usage (38937 vs. 1938) of the latter. So it really is the FREETEXT predicate that causes it to be so slow. Has anyone got any ideas on what the reason might be? Or further tests I could do?

    Read the article
