Search Results

Search found 59194 results on 2368 pages for 'depth first search'.

Page 46/2368 | < Previous Page | 42 43 44 45 46 47 48 49 50 51 52 53  | Next Page >

  • Rails: search a nested set (categories and sub-categories)

    - by bob
    Hello, I am using the awesome_nested_set plugin (http://github.com/collectiveidea/awesome_nested_set), and currently, if I choose a sub-category as the category_id for an item, I cannot search for it by its parent. Say I have Category.parent and Category.child, and I pick Category.child as the category my item belongs to, so the item has category_id 4 stored in it. When I visit the Category.parent page in my Rails application, I want to show products whose category_id is any of the parent's descendants as well. So ideally I want a find method that takes the descendants into account. You can get the descendants of a root by calling root.descendants (a built-in plugin method). How would I go about writing a find that also matches the descendants of a root, instead of what it does now, which is to bring up nothing unless the product has the specific category_id of Category.parent? I hope I am being clear here. I either need to figure out a way to create a find method or named_scope that returns the objects whose ids correspond to the descendants of a root, OR, if I have other options, what are they? I thought about adding a parent_id field to my products table to keep track of the parent, so I could then create two named scopes, one finding the parent items and one finding the child items, and chain them. I know I can create a named scope for each child and chain them together for multiple children, but this seems a very tedious process, and if more children are added, more named scopes have to be written.
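
    Under a nested set, "products in this category or any of its descendants" reduces to a range check on the lft/rgt columns, which is what such a named_scope would wrap. Below is a minimal, language-agnostic sketch of that query in Python with SQLite; the table layout and sample ids are hypothetical, and in Rails the plugin's own descendant helpers would stand in for the raw SQL.

        import sqlite3

        # Hypothetical nested-set schema: categories(id, lft, rgt), products(id, category_id).
        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE categories (id INTEGER PRIMARY KEY, lft INTEGER, rgt INTEGER);
            CREATE TABLE products   (id INTEGER PRIMARY KEY, category_id INTEGER);
            INSERT INTO categories VALUES (1, 1, 4), (4, 2, 3);  -- parent (id 1) contains child (id 4)
            INSERT INTO products   VALUES (10, 4);               -- product filed under the child
        """)

        def products_in_subtree(conn, parent_id):
            # Every descendant's lft falls inside the parent's (lft, rgt) interval,
            # so one condition covers the parent and its whole subtree.
            return conn.execute("""
                SELECT p.* FROM products p
                JOIN categories c      ON c.id = p.category_id
                JOIN categories parent ON parent.id = ?
                WHERE c.lft BETWEEN parent.lft AND parent.rgt
            """, (parent_id,)).fetchall()

        print(products_in_subtree(conn, 1))  # [(10, 4)] even though the product sits under the child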

    Read the article

  • Shell scripting: search/replace & check whether a file exists

    - by johndashen
    I have a Perl script (or any executable) E which takes a file foo.xml and writes a file foo.txt. I use a Beowulf cluster to run E for a large number of XML files, but I'd like to write a simple job-server script in shell (bash) which doesn't overwrite existing .txt files. I'm currently doing something like:

        #!/bin/sh
        PATTERN="[A-Z]*0[1-2][a-j]";   # this matches foo in all cases
        todo=`ls *.xml | grep $PATTERN -o`;
        isdone=`ls *.txt | grep $PATTERN -o`;
        whatsleft=todo - isdone;       # what's the unix magic?
        # tack on the .xml suffix with sed or something
        # and then call the job server
        jobserve E "$whatsleft";

    and I don't know how to get the difference between $todo and $isdone. I'd prefer using sort/uniq to something like a for loop with grep inside, but I'm not sure how to do it (pipes? temporary files?). As a bonus question, is there a way to do lookahead search in bash grep? To clarify, the simplest way to do what I'm asking is (in pseudocode):

        for i in `/bin/ls *.xml`
        do
            replace the xml suffix with txt
            if [that file does not exist]
                add to whatsleft list
            end
        done
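
    For comparison, that pseudocode translates almost line for line into a scripting language; here is a minimal sketch in Python (an assumption on my part, since the original job server is plain shell) that collects the .xml files whose .txt counterpart does not exist yet:

        from pathlib import Path

        # Every foo.xml that does not yet have a matching foo.txt still needs processing.
        whatsleft = [xml for xml in sorted(Path(".").glob("*.xml"))
                     if not xml.with_suffix(".txt").exists()]

        for xml in whatsleft:
            print(xml)   # or hand the list to the job server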

    Read the article

  • What is the correct way to implement a massive hierarchical, geographical search for news?

    - by Philip Brocoum
    The company I work for is in the business of sending press releases. We want to make it possible for interested parties to search for press releases based on a number of criteria, the most important being location. For example, someone might search for all news sent to New York City, Massachusetts, or ZIP code 89134, sent from a governmental institution, under the topic of "traffic". Or whatever. The problem is, we've sent, literally, hundreds of thousands of press releases, and searching is slow and complex. For example, a press release sent to Queens, NY should show up in the search I mentioned above even though it wasn't specifically sent to New York City, because Queens is a subset of New York City. We may also want to add "and", "or", negation, and text search to the query to create complex searches. These searches also have to be fast enough to serve as dynamic RSS feeds. I really don't know anything about search theory or how it's properly done. The way we are getting by right now is using a data mart that stores the locations each release was sent to in a single table. However, because of the subset issue mentioned above, the data mart is gigantic, with millions of rows. And we haven't even implemented cities yet; there are about 50,000 cities in the United States, which will increase the size of the data mart so much that I'm afraid it just won't work anymore. Anyway, I realize this is not a simple question and there won't be a "do this" answer. However, I'm hoping one of you can point me in the right direction to learn how massive searches are done, because I really know nothing about it, and such a search engine is turning out to be incredibly difficult to make. Thanks! I know there must be a way, because if Google can search the entire internet we must be able to search our own database :-)
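
    One common way to keep that containment logic out of the per-release rows is to expand the hierarchy at query time instead of at send time: store each release against its most specific location only, keep a separate table describing which location contains which, and expand the searched location into the set of locations underneath it. A rough sketch of the expansion in Python (the table layout and ids are made up for illustration):

        import sqlite3

        # Hypothetical tables: locations(id, parent_id) and releases(id, location_id, ...).
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE locations (id INTEGER PRIMARY KEY, parent_id INTEGER)")
        conn.executemany("INSERT INTO locations VALUES (?, ?)",
                         [(1, None), (2, 1), (3, 2)])   # e.g. NY state > New York City > Queens

        def contained_location_ids(conn, root_id):
            # Expand a searched location into itself plus everything underneath it.
            ids, frontier = {root_id}, [root_id]
            while frontier:
                marks = ",".join("?" * len(frontier))
                rows = conn.execute(
                    f"SELECT id FROM locations WHERE parent_id IN ({marks})", frontier)
                frontier = [r[0] for r in rows if r[0] not in ids]
                ids.update(frontier)
            return ids

        print(contained_location_ids(conn, 2))   # {2, 3}: New York City plus Queens

    A release sent only to Queens then matches a New York City search because the query filters on the expanded id set, not because the release was stored once per ancestor location.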

    Read the article

  • Average-case running time of the linear search algorithm

    - by Brahadeesh
    Hi all. I am trying to derive the average-case running time of the deterministic linear search algorithm. The algorithm searches for an element x in an unsorted array A in the order A[1], A[2], A[3], ..., A[n]. It stops when it finds the element x, or proceeds until it reaches the end of the array. I searched on Wikipedia and the answer given was (n+1)/(k+1), where k is the number of times x is present in the array. I approached it another way and am getting a different answer. Can anyone please give me the correct proof, and also let me know what's wrong with my method? E(T) = 1*P(1) + 2*P(2) + 3*P(3) + ... + n*P(n), where P(i) is the probability that the algorithm runs for 'i' time (i.e. compares 'i' elements). P(i) = (n-i)C(k-1) * (n-k)! / n!. Here, (n-i)C(k-1) is (n-i) choose (k-1): given that the algorithm has reached the ith step, the remaining k-1 x's must be in the last n-i elements, hence (n-i)C(k-1). (n-k)! is the total number of ways of arranging the remaining non-x numbers, and n! is the total number of ways of arranging the n elements in the array. I am not getting (n+1)/(k+1) on simplifying.
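
    For reference, the Wikipedia figure follows from a symmetry argument rather than from summing i*P(i) directly; a sketch of that derivation (assuming the k copies of x are placed uniformly at random among the n positions):

        % The k occurrences of x split the n - k other elements into k + 1 gaps,
        % each of expected size (n - k)/(k + 1) by symmetry, so the expected
        % number of comparisons until the first occurrence of x is
        \[
            E[T] \,=\, \frac{n - k}{k + 1} + 1 \,=\, \frac{n + 1}{k + 1}.
        \]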

    Read the article

  • Python SQLite FTS3 alternatives?

    - by Mike Cialowicz
    Are there any good alternatives to SQLite + FTS3 for Python? I'm iterating over a series of text documents and would like to categorize them according to some text queries. For example, I might want to know if a document mentions the words "rating" or "upgraded" within three words of "buy." The FTS3 syntax for this query is the following:

        (rating OR upgraded) NEAR/3 buy

    That's all well and good, but if I use FTS3, this operation seems rather expensive. The process goes something like this:

        # create an SQLite3 db in memory
        conn = sqlite3.connect(':memory:')
        c = conn.cursor()
        c.execute('CREATE VIRTUAL TABLE fts USING FTS3(content TEXT)')
        conn.commit()

    Then, for each document, do something like this:

        # insert the document text into the fts table, so I can run a query
        c.execute('insert into fts(content) values (?)', (content,))
        conn.commit()
        # execute my FTS query here, look at the results, etc
        # remove the document text from the fts table before working on the next document
        c.execute('delete from fts')
        conn.commit()

    This seems rather expensive to me. The other problem I have with SQLite FTS is that it doesn't appear to work with Python 2.5.4: the 'CREATE VIRTUAL TABLE' syntax is unrecognized. This means I'd have to upgrade to Python 2.6, which means re-testing numerous existing scripts and programs to make sure they work under 2.6. Is there a better way? Perhaps a different library? Something faster? Thank you.
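
    If the NEAR/3 test is the only thing needed per document, one alternative worth weighing is to skip the database round trip entirely and do the proximity check in plain Python; a minimal sketch (naive word tokenisation, no stemming or ranking, which FTS3 would otherwise provide) that roughly mirrors the NEAR semantics:

        import re

        def near(text, targets, anchor, distance=3):
            # True if any word in `targets` occurs within `distance` words of `anchor`.
            words = re.findall(r"\w+", text.lower())
            anchor_pos = [i for i, w in enumerate(words) if w == anchor]
            target_pos = [i for i, w in enumerate(words) if w in targets]
            return any(abs(i - j) <= distance for i in anchor_pos for j in target_pos)

        print(near("The stock was upgraded to a buy rating", {"rating", "upgraded"}, "buy"))  # True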

    Read the article

  • PHP: If No Results, Split the Search Request and Try to Find Parts of the Search

    - by elmaso
    Hello, I want to split the search request into parts if there's nothing to find. Example: "nelly furtado ft. jimmy jones" returns no results, so try to find with nelly, furtado, jimmy or jones. I have an API URL; that's the difficult part. Here are some of the actual snippets:

        $query = urlencode(strip_tags($_GET['search']));

    and

        $found = '0';
        if ($source == 'all') {
            if (!($res = @get_url('http://api.example.com/?key=' . $API . '&phrase=' . $query . '&sort=' . $sort))) {
                exit('<error>Cannot get requested information.</error>');
            }
        }

    How can I put an else branch in this snippet, so that if nothing is found it takes the first word, or the second word? Is this possible? Or maybe you can tell me where I can read up on this kind of thing? Thank you!
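
    For what it's worth, the fallback itself is the same in any language: query the full phrase first, and if the API reports nothing, retry word by word. A rough sketch in Python (the endpoint, parameters, and the "no results" test are placeholders taken from the question, not a real API):

        from urllib.parse import quote_plus
        from urllib.request import urlopen

        API = "http://api.example.com/"           # placeholder from the question

        def query_api(key, phrase, sort="relevance"):
            url = f"{API}?key={key}&phrase={quote_plus(phrase)}&sort={sort}"
            return urlopen(url).read()

        def has_hits(response):
            # Placeholder test: adapt to whatever an empty result looks like for this API.
            return b"<result" in response

        def search_with_fallback(key, phrase):
            result = query_api(key, phrase)
            if not has_hits(result):
                for word in phrase.split():       # retry with single words: nelly, furtado, ...
                    result = query_api(key, word)
                    if has_hits(result):
                        break
            return result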

    Read the article

  • Binary Search Tree - Postorder logic

    - by daveb
    I am looking at implementing a binary search tree. Before I do this, I want to verify my input data in postorder and preorder, and I am having trouble working out what the following numbers would be in those orders. I have the numbers 4, 3, 14, 8, 1, 15, 9, 5, 13, 10, 2, 7, 6, 12, 11, which I intend to insert into an empty binary tree in that order. The order I arrived at for POSTORDER (left subtree, right subtree, then root) is 2, 1, 6, 3, 7, 11, 12, 10, 9, 8, 13, 15, 14, 4. Have I got this right? I was wondering if anyone here would kindly verify whether the postorder sequence I came up with is indeed correct for my input. The order I got for PREORDER (visit root, left subtree, then right subtree) is 4, 3, 1, 2, 5, 6, 14, 8, 7, 9, 10, 12, 11, 15, 13. I can't be certain I got this right. Very grateful for any verification. Many thanks.
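
    Rather than checking the two sequences by hand, it is quick to build the tree from the insertion order and print both traversals; a small sketch in Python (the question is language-agnostic, so the language here is just for illustration):

        def insert(tree, key):
            if tree is None:
                return [key, None, None]      # node = [key, left, right]
            if key <= tree[0]:
                tree[1] = insert(tree[1], key)
            else:
                tree[2] = insert(tree[2], key)
            return tree

        def preorder(t):
            return [] if t is None else [t[0]] + preorder(t[1]) + preorder(t[2])

        def postorder(t):
            return [] if t is None else postorder(t[1]) + postorder(t[2]) + [t[0]]

        keys = [4, 3, 14, 8, 1, 15, 9, 5, 13, 10, 2, 7, 6, 12, 11]
        root = None
        for k in keys:
            root = insert(root, k)

        print("preorder: ", preorder(root))
        print("postorder:", postorder(root))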

    Read the article

  • How do I filter my Lucene search results?

    - by Andrew Bullock
    Say my requirement is "search for all users by name, who are over 18". If I were using SQL, I might write something like:

        SELECT * FROM [Users]
        WHERE ([firstname] LIKE '%' + @searchTerm + '%'
               OR [lastname] LIKE '%' + @searchTerm + '%')
          AND [age] >= 18

    However, I'm having difficulty translating this into Lucene.Net. This is what I have so far:

        var parser = new MultiFieldQueryParser(new[] { "firstname", "lastname" }, new StandardAnalyzer());
        var luceneQuery = parser.Parse(searchTerm);
        var query = FullTextSession.CreateFullTextQuery(luceneQuery, typeof(User));
        var results = query.List<User>();

    How do I add in the "[age] >= 18" bit? I've heard about .SetFilter(), but this only accepts Lucene queries, not IQueries. If SetFilter is the right thing to use, how do I build the appropriate filter? If not, what do I use and how do I do it? Thanks! P.S. This is a vastly simplified version of what I'm trying to do, for clarity; my WHERE clause is actually a lot more complicated than shown here. In reality I need to check whether ids exist in subqueries and check a number of unindexed properties. Any solution needs to support this. Thanks

    Read the article

  • Deletion procedure for a Binary Search Tree

    - by Metz
    Consider the deletion procedure on a BST when the node to delete has two children. Let's say I always replace it with the node holding the minimum key in its right subtree. The question is: is this procedure commutative? That is, does deleting x and then y give the same result as deleting y and then x? I think the answer is no, but I can't find a counterexample, nor figure out any valid reasoning. EDIT: Maybe I've got to be clearer. Consider the transplant(node x, node y) procedure: it replaces x with y (and its subtree). So, if I want to delete a node (say x) which has two children, I replace it with the node holding the minimum key in its right subtree:

        y = minimum(x.right)
        transplant(y, y.right)  // extracts the minimum (it doesn't have a left child)
        y.right = x.right
        y.left = x.left
        transplant(x, y)

    The question was how to prove that the procedure above is not commutative.
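
    One way to investigate without a proof is to look for a counterexample mechanically: build every insertion order over a handful of keys, delete two keys in both orders using the minimum-of-right-subtree rule, and compare the resulting trees. A sketch of that brute-force check in Python, using the key-copy variant of the transplant described above (it prints any pair of deletions whose results differ):

        from itertools import permutations

        def insert(t, k):
            if t is None:
                return {"k": k, "l": None, "r": None}
            side = "l" if k < t["k"] else "r"
            t[side] = insert(t[side], k)
            return t

        def delete(t, k):
            if t is None:
                return None
            if k < t["k"]:
                t["l"] = delete(t["l"], k)
            elif k > t["k"]:
                t["r"] = delete(t["r"], k)
            elif t["l"] is None:
                return t["r"]
            elif t["r"] is None:
                return t["l"]
            else:
                # Two children: copy in the minimum key of the right subtree, then remove it there.
                m = t["r"]
                while m["l"] is not None:
                    m = m["l"]
                t["k"] = m["k"]
                t["r"] = delete(t["r"], m["k"])
            return t

        def build(keys):
            t = None
            for k in keys:
                t = insert(t, k)
            return t

        # Try every insertion order of 5 keys and every ordered pair of deletions.
        for perm in permutations(range(5)):
            for x in range(5):
                for y in range(5):
                    if x != y:
                        a = delete(delete(build(perm), x), y)
                        b = delete(delete(build(perm), y), x)
                        if a != b:
                            print("differs:", perm, "deleting", x, "then", y)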

    Read the article

  • First languages with generic programming support

    - by oluies
    Which was the first language with generic programming support, and what was the first major statically typed language (widely used) with generics support? Generics implement the concept of parameterized types to allow for multiple types. The term generic means "pertaining to or appropriate to large groups of classes." I have seen the following mentions of "first": "First-order parametric polymorphism is now a standard element of statically typed programming languages. Starting with System F [20,42] and functional programming languages, the constructs have found their way into mainstream languages such as Java and C#. In these languages, first-order parametric polymorphism is usually called generics." (from "Generics of a Higher Kind", Adriaan Moors, Frank Piessens, and Martin Odersky) "Generic programming is a style of computer programming in which algorithms are written in terms of to-be-specified-later types that are then instantiated when needed for specific types provided as parameters. This approach, pioneered by Ada in 1983 ..." (from Wikipedia, Generic Programming)

    Read the article

  • Does Lucene search work on large documents?

    - by shaon-fan
    Hi there, I have a problem when searching with Lucene. The indexing step works well even for huge documents, such as a .pst file (the Outlook mail storage): it can build an index that includes all of the information in the .pst. The only problem is that the document is sometimes too large and contains a great many words. When I search with Lucene, it only seems to process the front part of the document; if a word appears only in the back part, it isn't found and there are no hits in the result. But when I split the document into several parts (in a crude way, while debugging) and search each part, it works fine. So I want to know how to split the indexed content, and what size limit applies to searching. Cheers, and awaiting a reply.

    Update: following Coady's advice, I set the maximum field length to 2^31-1, but the search results still don't include what I want. Simply put, I convert the document into a string array to analyze; one document has 79,680 words including spaces and symbols. When I search for a certain word, it returns only 300 hits, although there are actually more than 300 results. For the same reason, when I search for a word in the back part of the document, it still can't be found.

        // set the length
        indexWriter.SetMaxFieldLength(2147483647);
        // search
        IndexSearcher searcher = new IndexSearcher(Program.Parameters["INDEX_LOCATION"].ToString());
        Hits hits = searcher.Search(query);

    This is my code, the same as anyone else's. I found this problem when I needed to count the hits for every word in a document, and that's also when I found that words in the back part of the document can't be found. Please help me figure out whether there is some searcher length setting somewhere. Have you run into this problem?

    Read the article

  • Google Image Search Quick Fix

    - by Asian Angel
    Are you tired of unneeded webpage loading and extra link clicking just to access an image found using Google Image Search? Now you can jump directly to the image itself with the clickGOOGLEview extension for Google Chrome. The Problem: When you find an image that you like using Google Image Search, you always have to go through extra hassle just to get to the image itself. First an entire webpage loads in your browser, and then you have to click through that irritating "See full size image" link. All that you need is the image, right? Problem Fixed: Once you have installed the clickGOOGLEview extension you will absolutely love the result. Find an image that you like, click the link, and there is your new image without any of the hassle or extra link clicking. Big or small, having direct access to the image is how it should have been from the beginning. Conclusion: The clickGOOGLEview extension does one thing and does it extremely well…it gets you to those images without the extra hassle or additional link clicking. Links: Download the clickGOOGLEview extension (Google Chrome Extensions)

    Read the article

  • MSSQL Search Proper Names Full Text Index vs LIKE + SOUNDEX

    - by Matthew Talbert
    I have a database of names of people that has (currently) 35 million rows. I need to know the best method for quickly searching these names. The current system (not designed by me) simply has the first and last name columns indexed and uses "LIKE" queries, with the additional option of using SOUNDEX (though I'm not sure this is actually used much). Performance has always been a problem with this system, so currently the searches are limited to 200 results (which still takes too long to run). So, I have a few questions: Does a full-text index work well for proper names? If so, what is the best way to query proper names? (CONTAINS, FREETEXT, etc.) Is there some other system (like Lucene.net) that would be better? Just for reference, I'm using Fluent NHibernate for data access, so methods that work well with it will be preferred. I'm using MS SQL 2008 currently.

    Read the article

  • Skip the first RenderTarget when writing to MRT with Opaque blending

    - by cubrman
    I am writing to three render targets and want to know how to tell the GPU not to write to the first RT. When you write a shader you can simply output less data than you have RTs (like outputting a single float4 when writing to three RTs) and only the first RTs will be affected, but you cannot specify that this data should go anywhere other than COLOR0, then 1, etc. Is there a way to write to several RTs but skip the first target? If I output zeroes, the data in the target becomes zeroes, but I need it to remain untouched in the first target and change only in the specified ones. The reason I need this is to prevent data loss when calling SetRenderTarget() with DiscardContents RTs. I write to all the RTs at one point and need to write to only the specified ones afterwards. It must be the first texture because I have a depth buffer linked to it (XNA 4.0). Thanks.

    Read the article

  • Sorting data by relevance, from multiple tables

    - by Oden
    Hey, how is it possible to sort data from multiple tables by relevance? My table structure is as follows: I have 3 tables in my database; one table contains the names of solar systems, and the second, for example, the names of planets. There is one more table, which is a connection between solar systems and planets. If I want to get the data of a planet which is in the Milky Way, I post this data to the server, and it gives me a multi-dimensional array which contains: the Milky Way, with every planet in it, and every planet whose name contains the string "Milky Way" (maybe that's a bad example, because I don't think there is a planet with this name, but the main concept is there). But I want to put the most relevant restaurants at the top of the array (for the relevance I would check the description of the restaurants or something like that). So, how would you do that kind of data sorting?

    Read the article

  • Using a heightmap to simulate 3D in an isometric 2D game

    - by VaTTeRGeR
    I saw a video of a 2.5D engine that used heightmaps to do z-buffering. Is this hard to do? I have more or less no idea of OpenGL (LWJGL) and that kind of thing. I could imagine that you compare each pixel and its depth-map value against the depth map of the already drawn background to determine whether it gets drawn or not. Are there any tutorials on how to do this? Is this a common problem? It would already be awesome if somebody knew the names of the OpenGL commands, so that I can go through some general tutorials on them. Greets! Great 2.5D engine with the needed effect; please go to the last 30 seconds. Edit: I just realised my question wasn't expressed clearly enough: how can I tell OpenGL to compare the existing depth buffer with a grayscale texture, to determine whether a pixel should get drawn or not?

    Read the article

  • Tracking first visit date in Google Analytics

    - by dusan
    I want to track a site's visitor retention using Google Analytics, to see whether unregistered users are returning to it, within a time frame of 2+ months from now. This blog post seems to be on the right track, but I want to track unregistered users, so I don't have a "join date" or similar variable at my disposal. This other blog post suggests using all 5 GA custom variables, using the first variable slot on the first week, variable 2 on week 2, etc. This method would let me track 5 weeks of visitors. I want to track more than 5 weeks of visitors, so I was thinking of using two custom variables in GA: the visitor's first visit date and the visitor's last visit date. How can I save the first visit date? If I save another value in the same slot, the old value will be overwritten, and I don't know how reliable it is to save that variable conditionally (reading the __utmv cookie to check whether the "first visit date" is set, and if it isn't, saving the current date).

    Read the article

  • CreatePatternBrush and screen color depth

    - by Carlos Alloatti
    I am creating a brush using CreatePatternBrush with a bitmap created with CreateBitmap. The bitmap is 1 pixel wide and 24 pixels tall; I have the RGB value for each pixel, so I create an array of RGBQUADs and pass that to CreateBitmap. This works fine when the screen color depth is 32bpp, since the bitmap I create is also 32bpp. When the screen color depth is not 32bpp, this fails, and I understand why it does, since I should be creating a compatible bitmap instead. It seems I should use CreateCompatibleBitmap instead, but how do I put the pixel data I have into that bitmap? I have also read about CreateDIBPatternBrushPt, CreateDIBitmap, CreateDIBSection, etc. I don't understand what a DIBSection is, and find the subject generally confusing. I do understand that I need a bitmap with the same color depth as the screen, but how do I create it having only the 32bpp pixel data?

    Read the article

  • Recursive Binary Search Tree Insert

    - by Nick Sinklier
    So this is my first Java program, but I've done C++ for a few years. I wrote what I think should work, but in fact it does not. I had a stipulation of having to write a method for this call: tree.insertNode(value); where value is an int. I wanted to write it recursively, for obvious reasons, so I had to do a workaround:

        public void insertNode(int key) {
            Node temp = new Node(key);
            if (root == null)
                root = temp;
            else
                insertNode(temp);
        }

        public void insertNode(Node temp) {
            if (root == null)
                root = temp;
            else if (temp.getKey() <= root.getKey())
                insertNode(root.getLeft());
            else
                insertNode(root.getRight());
        }

    Thanks for any advice.

    Read the article

  • Ranking on the First Page in Search Engines

    How important is it to rank on the first page of the results that search engines display when keywords are typed in and the Enter key is pressed? This is a question every website administrator must have asked himself at least a zillion times while optimizing his website, and one he continues to ask even after the site has been fully optimized and has achieved a ranking in the search engines.

    Read the article

  • Refining Search Results [PHP/MySQL]

    - by Dae
    I'm creating a set of search panes that allow users to tweak their result set after submitting a query. We pull commonly occurring values in certain fields from the results and display them in order of their popularity - you've all seen this sort of thing on eBay. So, if a lot of rows in our results were created in 2009, we'll be able to click "2009" and see only rows created in that year. What in your opinion is the most efficient way of applying these filters? My working solution was to discard entries from the results that didn't match the extra arguments, like:

        while ($row = mysql_fetch_assoc($query)) {
            foreach ($_GET as $key => $val) {
                if ($val !== $row[$key]) {
                    continue 2;
                }
            }
            // Output...
        }

    This method should, in effect, only query the database once, as adding filters doesn't change the query - MySQL can cache and reuse one data set. On the downside it makes pagination a bit of a headache. The obvious alternative would be to build any additional criteria into the initial query, something like:

        $sql = "SELECT * FROM tbl WHERE MATCH (title, description) AGAINST ('$search_term')";
        foreach ($_GET as $key => $var) {
            $sql .= " AND " . $key . " = " . $var;
        }

    Are there good reasons to do this instead? Or are there better options altogether? Maybe a temporary table? Any thoughts much appreciated!
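
    The second approach interpolates request values straight into the SQL string; if that route is taken, the usual refinement is a whitelist of filterable columns plus bound parameters for the values. A rough sketch of that shape in Python (the column names are hypothetical; any DB-API driver that uses %s placeholders would do):

        ALLOWED = {"year", "category", "brand"}   # hypothetical filterable columns

        def build_query(search_term, filters):
            sql = "SELECT * FROM tbl WHERE MATCH (title, description) AGAINST (%s)"
            params = [search_term]
            for column, value in filters.items():
                if column in ALLOWED:             # never trust raw request keys as column names
                    sql += f" AND {column} = %s"
                    params.append(value)
            return sql, params

        sql, params = build_query("vintage radio", {"year": "2009"})
        # cursor.execute(sql, params)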

    Read the article

  • CakePHP: route old Google search results to the new home page

    - by ion
    Hi there, I have created a new website for a company and I would like all the previous search engine results to be redirected. Since there were quite a few pages, and most of them used an id, I would like to use something generic instead of re-routing all the old pages individually. My first thought was to do this:

        Router::connect('/*', array('controller' => 'pages', 'action' => 'display', 'home'));

    and put it at the very end of the routes.php file (since routes are prioritized), so that all requests not matched by a previous route would fall through to this one and redirect to the home page. However, this does not work. I'm pasting my routes.php file (since it is small), hoping that someone could give me a hint:

        Router::connect('/', array('controller' => 'pages', 'action' => 'display', 'home'));
        Router::connect('/company/*', array('controller' => 'articles', 'action' => 'view'));
        Router::connect('/contact/*', array('controller' => 'contacts', 'action' => 'view'));
        Router::connect('/lang/*', array('controller' => 'p28n', 'action' => 'change'));
        Router::connect('/eng/*', array('controller' => 'p28n', 'action' => 'shuntRequest', 'lang' => 'eng'));
        Router::connect('/gre/*', array('controller' => 'p28n', 'action' => 'shuntRequest', 'lang' => 'gre'));
        Router::parseExtensions('xml');

    Read the article

  • Implement a server that receives and processes client requests (Cassandra as backend): Python or C++?

    - by Mickey Shine
    I am planning to build an inverted-index searching system with Cassandra as its storage backend, but I need some guidance on building a highly efficient search daemon. I know of a web server written in Python called Tornado. My questions are: Is Python a good choice for developing this kind of application? Is Nginx (or Sphinx) a good example whose architecture I could study in order to implement a highly efficient server? What else should I learn to do this? Thank you~
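
    For a feel of how small the Python side of the core data structure is, here is a toy inverted index, with an in-memory dict standing in for the Cassandra-backed storage (everything here is illustrative, not a proposed design):

        from collections import defaultdict
        import re

        index = defaultdict(set)            # term -> set of document ids

        def add_document(doc_id, text):
            for term in set(re.findall(r"\w+", text.lower())):
                index[term].add(doc_id)

        def search(query):
            # Documents containing every term of the query (AND semantics).
            terms = re.findall(r"\w+", query.lower())
            if not terms:
                return set()
            return set.intersection(*(index[t] for t in terms))

        add_document(1, "Tornado is a Python web server")
        add_document(2, "Cassandra as a storage backend")
        print(search("python server"))      # -> {1}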

    Read the article

< Previous Page | 42 43 44 45 46 47 48 49 50 51 52 53  | Next Page >