Search Results

Search found 32130 results on 1286 pages for 'local search'.

Page 21/1286

  • Architecture for analysing search result impressions/clicks to improve future searches

    - by Hais
    We have a large database of items (10m+) stored in MySQL and intend to implement search on the metadata of these items, taking advantage of something like Sphinx. The dataset will change slightly on a daily basis, so Sphinx will re-index daily. However, we want the algorithm to self-learn and improve search results by analysing impression and click data, so that we provide better results for our customers on that search term, and possibly on other similar search terms too. I've been reading up on Hadoop and it seems like it has the potential to crunch all this data, although I'm still unsure how to approach it. Amazon has tutorials for compiling impression vs click data using MapReduce, but I can't see how to get this data into a usable format. My idea is that when a search term comes in, I query Sphinx to get all the matching items from the dataset, then query the analytics (compiled on an hourly basis or similar) so that we know the most popular items for that search term, and then cache the final results using something like Memcached, Membase or similar. Am I along the right lines here?
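
    A minimal sketch of the flow described above: take the Sphinx candidates, re-rank them by the per-term click counts produced by the hourly analytics job, and cache the blended list. The helper functions and the in-memory cache below are hypothetical stand-ins, not real Sphinx or Memcached client calls.

```python
# Hypothetical sketch: Sphinx candidates re-ranked by hourly click analytics.
from typing import Dict, List

_cache: Dict[str, List[int]] = {}           # stand-in for Memcached/Membase

def sphinx_match(term: str) -> List[int]:
    # stand-in for a Sphinx metadata query returning matching item ids
    return [101, 205, 309, 412]

def load_hourly_click_counts(term: str) -> Dict[int, int]:
    # stand-in for the hourly MapReduce output: item_id -> click count
    return {309: 42, 101: 17, 412: 3}

def search(term: str) -> List[int]:
    if term in _cache:                       # serve the cached ranking if we have one
        return _cache[term]
    candidates = sphinx_match(term)          # 1. metadata matches from Sphinx
    clicks = load_hourly_click_counts(term)  # 2. popularity for this search term
    ranked = sorted(candidates, key=lambda i: clicks.get(i, 0), reverse=True)
    _cache[term] = ranked                    # 3. cache the blended result
    return ranked

print(search("copper cable"))                # [309, 101, 412, 205]
```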

    Read the article

  • No action responded to search

    - by gazza58
    I have defined a method called 'search' in my RecipesController, and it is not private. In routes.rb I have the following: map.connect 'recipes/search', :controller => :recipes, :action => :search I get the following error: No action responded to search. Actions: ... where my method 'search' does not appear in the actions list. If I change the method name from 'search' to 'searchthings' and the action in routes to 'searchthings', then this seems to work. What am I missing here?

    Read the article

  • How to "redefine search" or correct "misspelling" from the database

    - by From.ME.to.YOU
    Hello, I want to add a new feature to the search on my website. I'm using PHP and MySQL. The MySQL database contains a table for the items the user will search for; each item has a "keyword" column holding comma-separated keywords (EXAMPLE: cat,dog,horse). After the user searches on my website, I want to get the keywords that are, let's say, "85%" similar to his search keyword; this is for the "redefine search" part. As for misspelling, I want a service or something that tells me whether the keyword is correct or misspelled, so I get some corrections, check whether those exist in the database, and then offer those corrections to the user so he can change his search keyword. I'm not asking for a complete solution here, but if you can point me in one direction or another, that would be great. Thanks guys. Cheers
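
    One way to approach the "85% similar" part on the application side is an edit-distance-style similarity ratio. The sketch below uses Python's standard difflib purely to illustrate the matching logic (the question's stack is PHP/MySQL, so this is only an illustration), with a hard-coded list standing in for the comma-separated keyword column.

```python
# Illustration of "85% similar" matching plus a loose-cutoff spelling suggestion.
import difflib

keywords = ["cat", "dog", "horse", "parrot", "hamster"]   # stand-in for the keyword column

def similar_keywords(query: str, cutoff: float = 0.85) -> list[str]:
    # returns stored keywords whose similarity ratio to the query is >= cutoff
    return difflib.get_close_matches(query.lower(), keywords, n=5, cutoff=cutoff)

def suggest_correction(query: str) -> str | None:
    # a looser cutoff catches misspellings, e.g. "hrose" -> "horse"
    matches = difflib.get_close_matches(query.lower(), keywords, n=1, cutoff=0.6)
    return matches[0] if matches else None

print(similar_keywords("hors"))       # ['horse']
print(suggest_correction("hrose"))    # 'horse'
```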

    Read the article

  • How to use semantic markup and Google Places to assist in local search SEO?

    - by ElHaix
    In this article, adding localized markup like the following is supposed to help your site's SEO: <div itemscope itemtype="http://data-vocabulary.org/Organization"> <span itemprop="name">Search Engine People</span> <span itemprop="address" itemscope itemtype="http://data-vocabulary.org/Address"> <span itemprop="street-address">100 Westney Road South Unit 200, Building E</span> <span itemprop="locality">Ajax</span>, <span itemprop="region">ON</span> <span itemprop="country-name">Canada</span> <span itemprop="postal-code">L1S 7H3</span> </span> </div> What about a site that contains valid localized results, where the actual business location is not relevant? For example, a site with valid local results from San Francisco, CA and Phoenix, AZ. Should these tags be added to the localized results, and does anyone have experience with how much adding these tags has improved results? Google Places, however, seems to ask for the business's actual physical location. Is there a way to use Google Places in the aforementioned example to assist in SEO?

    Read the article

  • Thread Local Memory for Scratch Memory.

    - by Hassan Syed
    I am using Protocol Buffers and OpenSSL to generate HMACs and then CBC-encrypt the two fields to obfuscate the session cookies, similar to Kerberos tokens. Protocol Buffers' API communicates with std::strings and has a buffer caching mechanism; I exploit the caching mechanism, for successive calls in the same thread, by placing it in thread-local memory; additionally, the OpenSSL HMAC and EVP CTXs are also placed in the same thread-local memory structure (see this question for some detail on why I use thread-local memory and the massive amount of speedup it enables even with a single thread). The generation and deserialization, "my algorithms", of these cookie strings uses intermediary void *s and std::strings, and since Protocol Buffers has an internal memory retention mechanism I want these characteristics for "my algorithms". So how do I implement a common scratch memory? I don't know much about the rdbuf of the std::string object. I would presumably need to grow it to the lowest common size ever encountered during the execution of "my algorithms". Thoughts?
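
    The question is about C++ and thread-local storage, but the underlying pattern, a per-thread scratch buffer that is reused and only ever grows to the largest size seen, can be sketched in a few lines. The Python below (using threading.local) is only an illustration of that pattern, not of the Protocol Buffers or OpenSSL APIs.

```python
# Sketch of a per-thread, grow-only scratch buffer (illustrative only).
import threading

_tls = threading.local()

def scratch(size: int) -> memoryview:
    """Return a per-thread buffer of at least `size` bytes, reusing and growing
    one allocation per thread instead of allocating on every call."""
    buf = getattr(_tls, "buf", None)
    if buf is None or len(buf) < size:
        # grow to at least the largest size requested so far on this thread
        _tls.buf = bytearray(max(size, 2 * len(buf)) if buf else size)
        buf = _tls.buf
    return memoryview(buf)[:size]

first = scratch(4096)    # first call on this thread allocates
second = scratch(1024)   # later calls reuse the same backing storage
```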

    Read the article

  • Consumer Oriented Search In Oracle Endeca Information Discovery - Part 2

    - by Bob Zurek
    As discussed in my last blog posting on this topic, Information Discovery, a core capability of the Oracle Endeca Information Discovery solution, enables businesses to search, discover and navigate through a wide variety of big data, including structured, unstructured and semi-structured data. With search as a core advanced capability of our product, it is important to understand some of the key differences and capabilities in the underlying data store of Oracle Endeca Information Discovery, which is our Endeca Server. In the last post on this subject, we talked about Exploratory Search capabilities along with support for cascading relevance. Additional search capabilities in the Endeca Server, which differentiate it from simple keyword-based "search boxes" in other Information Discovery products, also include:

    The Endeca Server Supports Set Search. The Endeca Server is organized around set retrieval, which means that it looks at groups of results (all the documents that match a search), as well as the relationship of each individual result to the set. Other approaches only compute the relevance of a document by comparing the document to the search query, not by comparing the document to all the others. For example, a search for “U.S.” in another approach might match the title of a document and get a high ranking. But what if it were a collection of government documents in which “U.S.” appeared in many titles, making that clue less meaningful? A set analysis would reveal this and be used to adjust relevance accordingly.

    The Endeca Server Supports Second-Order Relevance. Unlike simple search interfaces in traditional BI tools, which provide limited relevance ranking, such as a list of results based on keyword matching, Endeca enables users to determine the most salient terms to divide up the result. Determining this second-order relevance is the key to providing effective guidance.

    Support for Queries and Filters. Search is the most common query type, but hardly complete, and users need to express a wide range of queries. Oracle Endeca Information Discovery also includes navigation, interactive visualizations, analytics, range filters, geospatial filters, and other query types that are more commonly associated with BI tools. Unlike other approaches, these queries operate across structured, semi-structured and unstructured content stored in the Endeca Server. Furthermore, this set is easily extensible because the core engine allows for pluggable features to be added. Like a search engine, queries are answered with a results list, ranked to put the most likely matches first. Unlike “black box” relevance solutions, which generalize one strategy for everyone, we believe that optimal relevance strategies vary across domains. Therefore, it provides line-of-business owners with a set of relevance modules that let them tune the best results based on their content. The Endeca Server query result sets are summarized, which gives users guidance on how to refine and explore further. Summaries include Guided Navigation® (a form of faceted search), maps, charts, graphs, tag clouds, concept clusters, and clarification dialogs. Users don’t explicitly ask for these summaries; Oracle Endeca Information Discovery analytic applications provide the right ones, based on configurable controls and rules. For example, the analytic application might guide a procurement agent filtering for in-stock parts by visualizing the results on a map and calculating their average fulfillment time.

    Furthermore, the user can interact with summaries and filters without resorting to writing complex SQL queries; the user can simply click to add filters. Within Oracle Endeca Information Discovery, all parts of the summaries are clickable and searchable. We are living in a search-driven society where business users really seem to enjoy entering information into a search box. We do this every day as consumers, and therefore we have gotten used to looking for that box. However, the key to getting the right results is to guide the user in a way that provides additional discovery, beyond what they may have anticipated. This is why these important and advanced features of search inside the Endeca Server have been so important. They have helped to guide our great customers to success.
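
    To make the set-search point above concrete: the snippet below is not Endeca's algorithm, just a small illustration of scoring a query term by how informative it is within the result set, so a term like "U.S." that appears in every title of the set contributes little, while the same term in a mixed set boosts the one document that contains it.

```python
# Illustration only: weight a query term by how rare it is within the result set.
import math

def set_aware_scores(query_terms: list[str], titles: list[str]) -> list[float]:
    n = len(titles)
    scores = []
    for title in titles:
        words = title.lower().split()
        score = 0.0
        for term in query_terms:
            if term.lower() in words:
                # how many titles in this result set also contain the term?
                df = sum(term.lower() in t.lower().split() for t in titles)
                score += math.log(1 + n / df)   # common-in-set terms count for less
        scores.append(score)
    return scores

gov   = ["U.S. budget report", "U.S. census summary", "U.S. trade deficit"]
mixed = ["U.S. budget report", "EU census summary", "China trade deficit"]
print(set_aware_scores(["U.S."], gov))    # identical low weights: the term tells us nothing
print(set_aware_scores(["U.S."], mixed))  # only the first document gets a meaningful boost
```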

    Read the article

  • What dangers await if I block non-standard, non-major-USA search engine bots from my USA-only website?

    - by Ryan
    I noticed tons of bandwidth being used by non-USA search engine bots, so I began blocking them in an effort to save bandwidth and CPU cycles for actual users and the search engines they come from (Google, Bing, Yahoo, Ask, etc.). Other than potentially losing some international traffic (which isn't really important to us since all of our content is very USA-centric), what additional dangers should I be concerned about? I'm using a modified version of Jeff Starr's User Agent Blocklist.

    Read the article

  • Why does Google not ignore the word "languages", although I have set it to be ignored in advanced search settings?

    - by jitendra1234
    Why does Google not ignore the word "languages", although I have set it to be ignored in advanced search settings? Here is the term I am using in Google search: -languages site:http://en.wikipedia.org/wiki/ and here is the first result, where the word "languages" is still present (you can do a quick Ctrl+F to find out): http://en.wikipedia.org/wiki/Walter_Bedell_Smith I am just curious to know why Google has not ignored the word "languages".

    Read the article

  • Why does the file lens not search mounted samba shares?

    - by Kaput1982
    I've got several samba shares mounted in my home directory (mounted with the "mount.cifs" command). I can browse the shares just fine. From Nautilus I can search them just fine. My question is, why doesn't the Dash file lens search the mounts as well? I've also noticed that files I open from the shares do not show up as recently used (again in the file lens). I've Googled around and I've been unable to come up with anything.

    Read the article

  • How to make Google show my site in search results like the following image? [closed]

    - by Samik Chattopadhyay
    Possible Duplicate: What are the most important things I need to do to encourage Google Sitelinks? Currently Google is displaying my site (http://layzend.info) like this in the search results: only the link and meta description, without any internal page links. But I want the search result to be like the following, where the internal links (sitelinks) are also displayed. How is this possible? Please help me to make my site more SEO friendly.

    Read the article

  • Does Googlebot (and/or search engines) index a forwarded page? [duplicate]

    - by user2889419
    This question already has an answer here: HTTP and HTTPS impacts on SEO (1 answer). Let's say I have the example.com domain, and I force users to use HTTPS over HTTP. The question is: since browsers just accept and load the forwarded/new page (when a request for http://example.com is redirected to https://example.com), does Googlebot (or another search engine) accept the forwarded page, index the new page, and just ignore the old page? In other words, do search engines accept HTTPS besides HTTP?

    Read the article

  • Sharepoint Search crawl not working

    - by Satish
    Search crawling is erroring out on my MOSS 2007 installation. I get the following error for all the web apps in the crawl logs: http://mysites.devserver URL could not be resolved. The host may be unavailable, or the proxy settings are not configured correctly on the index server. The Application event log also has the following corresponding error: The start address http://mysites.devserver cannot be crawled. Context: Application 'SSPMain', Catalog 'Portal_Content' Details: The URL of the item could not be resolved. The repository might be unavailable, or the crawler proxy settings are not configured. To configure the crawler proxy settings, use the Proxy and Timeout page in search administration. (0x80041221) I'm using Windows 2008 Server. I tried accessing the site using the above-mentioned URL and it's available. I applied the registry setting for the loopback issue found here: http://support.microsoft.com/kb/896861, but still no luck. Any ideas?

    Read the article

  • Removing Bing from my Google search bar when I open a new tab

    - by user329869
    Bing has rudely planted itself on my "open new tab" page, so above Bing my regular Google is there, but below is the irritating Bing search bar! I have tried everything I know of to get rid of it and it will not go! I cannot find it listed under my settings, Control Panel and/or uninstall! This is the second time Bing has made its unwelcome tush comfy in my Google area! How frustrating! I miss my MacBook Pro!

    Read the article

  • Search all files containing text

    - by enthdegree
    With Busybox, how do you search for an expression within a bunch of files recursively through a bunch of directories, but only look through text files? We don't know what the file's suffix is going to be; it could be .sh, it could be nothing, it could be something else. I was considering somehow basing the search on encoding although I am not quite sure what the encoding would be either. I've tried busybox grep -r but it searches through binary files too, which wastes a lot of time.
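
    If the busybox build has no option to skip binaries, one workaround is to apply the usual text-vs-binary heuristic yourself: treat a file as text when its first block contains no NUL byte. The sketch below shows the idea in Python (assuming an interpreter is available on the box, which it may not be); the same check could be built into a small shell loop instead.

```python
# Recursive grep that skips files whose first block contains a NUL byte.
import os
import re
import sys

def looks_like_text(path: str, blocksize: int = 1024) -> bool:
    try:
        with open(path, "rb") as f:
            return b"\x00" not in f.read(blocksize)
    except OSError:
        return False

def grep_text_files(pattern: str, root: str = ".") -> None:
    regex = re.compile(pattern)
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if not looks_like_text(path):
                continue                      # skip binaries regardless of suffix
            with open(path, errors="replace") as f:
                for lineno, line in enumerate(f, 1):
                    if regex.search(line):
                        print(f"{path}:{lineno}:{line.rstrip()}")

if __name__ == "__main__":
    if len(sys.argv) > 1:
        grep_text_files(sys.argv[1])
```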

    Read the article

  • NHibernate.Search - async mode

    - by Atul
    Hi, I am using NHibernate Lucene search in my project. Lucene.Net.dll - v 2.3.1.3 NHibernate.dll - v 2.1.0.4000 At this point I am trying to use the async option for indexing and used the following options: config.SetProperty(NHibernate.Search.Environment.WorkerExecution, "async"); config.SetProperty(NHibernate.Search.Environment.WorkerThreadPoolSize, "1"); config.SetProperty(NHibernate.Search.Environment.WorkerWorkQueueSize, "5000"); Questions: 1) My initial index was not built with this option; when I used these settings for the first time, I had an error saying NHibernate.Search.dll was not found. When I deleted the existing index and then started again, it went fine. Do we need to rebuild indexes whenever we change config settings like the above? 2) How should the size of the index be interpreted? Initially my index was about 400 MB (built over the last few months), which I deleted. Later, when I reindexed, the size of the index went down to 5 MB! Search appears to be all right after limited testing, but such a change seemed a bit scary. Should we delete/rebuild indexes once in a while, and is it normal for the size to change this drastically? 3) Are my settings above OK? When I had WorkerThreadPoolSize=5, I once got a Dr. Watson kind of error. Please advise on best practices for using the async configuration for search. Regards, Atul

    Read the article

  • Search multiple search engines with a single keyword at the same time in Chrome?

    - by cptloop
    I want to search multiple websites at once by using a keyword trigger in Google Chrome. I am trying to achieve this with JavaScript, as described in this topic over at mozillazine. This is the code that supposedly works in Firefox: javascript:void(window.open('http://www.google.com/search?q=%s'));void(window.open('http://www.altavista.com/web/results?q=%s')) I have tried to insert this code into the "URL with %s in place of query" field, but nothing happens when I invoke it. Is it possible to get this to work, one way or another, in Chrome?

    Read the article

  • Fuzzy Search on Material Descriptions including numerical sizes & general descriptions of material types

    - by Kyle
    We're looking to provide a fuzzy search on an electrical materials database (i.e. conduit, cable, etc.). The problem is that, because of a lack of consistency across all material types, we could not split sizes into separate fields from the text description, because some materials are rated by things other than size. I've attempted a combination of a full-text search and a SQL CLR implementation of the Levenshtein search algorithm (for assistance in ranking), but my results are a little funky (i.e. they are not sorting correctly due to improper ranking). For example, if the search term is "3/4" ABCD Conduit", I might get back several irrelevant results in the following order: 1/2" Conduit; 1/4" X 3/4" Cable; 1/4" Cable Ties; 3/4" DFC Conduit Tees; 3/4" ABCD Conduit; 3/4" Conduit. I believe I've nailed the problem down to the fact that these two search algorithms do not factor in the relevance of punctuation and numerics. That is, in such a search, I'd expect the size to take precedence over any fuzzy match on the rest of the description, but my results don't reflect that. My question is: can anyone recommend better search algorithms or different approaches that may be better suited to searching a combination of alphanumerics and punctuation characters?
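
    One direction that matches the "size takes precedence" expectation: pull the size tokens out with a regular expression, require them to match exactly, and only use the fuzzy ratio to rank within each size bucket. The sketch below is illustrative Python, not the SQL CLR / full-text setup described above.

```python
# Illustration: exact size match first, fuzzy ratio on the rest second.
import difflib
import re

SIZE = re.compile(r'\d+(?:/\d+)?"')            # matches tokens like 3/4" or 1/2"

def split_sizes(text: str) -> tuple[frozenset, str]:
    sizes = frozenset(SIZE.findall(text))
    rest = SIZE.sub("", text).lower()
    return sizes, rest

def rank(query: str, descriptions: list[str]) -> list[str]:
    q_sizes, q_rest = split_sizes(query)
    def key(desc: str):
        d_sizes, d_rest = split_sizes(desc)
        size_hit = 1 if q_sizes and q_sizes == d_sizes else 0
        fuzz = difflib.SequenceMatcher(None, q_rest, d_rest).ratio()
        return (size_hit, fuzz)                 # size dominates, fuzziness breaks ties
    return sorted(descriptions, key=key, reverse=True)

items = ['1/2" Conduit', '1/4" X 3/4" Cable', '1/4" Cable Ties',
         '3/4" DFC Conduit Tees', '3/4" ABCD Conduit', '3/4" Conduit']
print(rank('3/4" ABCD Conduit', items))         # the exact-size matches come first
```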

    Read the article

  • Modify database for the SharePoint 2010 Enterprise Search administration web site

    - by Mark Hall
    Does anyone know how to modify the database settings for the Enterprise Search administration web site? When you configure the service application via Central Administration, SharePoint just decides to use the default database server and gives it a name like Enterprise_Search_DB_Identifier. I want to modify this to at least give it a name that makes sense, like SharePoint_Search_AdministrationWebContent, and it might be nice to move it to the database server that is hosting the crawl and property databases. I figured out how to move the Central Administration web content database, but this database is not listed as a content database; it is listed as a Microsoft.Office.Server.Search.Administration.SearchAdminDatabase. I have not tested whether the same process would work, but because you are doing a RemoveContentDatabase and NewContentDatabase, I would assume not. Any help would be appreciated.

    Read the article

  • Search and replace global modifier

    - by mrucci
    Is there any reason why non-global/first-occurrence substitution is the default in many text editing programs (vim, sed, perl, etc.)? I am talking about the /g flag of search and replace commands like: :s/pan/focaccia/g # in vim sed 's/sfortuna/fortuna/g' # with sed which substitute every occurrence of the search pattern with the replacement string. After (not too) many years of vim and sed usage I still have not found any use case for non-global substitutions. Is there some valid historical reason? Or is it just because it is? Thanks.

    Read the article

  • vim: highlighting a search term without moving the cursor

    - by ajwood
    Using Vim, I sometimes find myself staring at a section of source code for a while and suddenly want some variable on the screen to pop out. That's easy: /<var> which highlights them all. My issue is that, more often than not, the search shifts my window so I'm not looking at the source code from the same place. Even if it's only shifted a few lines, it still throws me off, since I need to take a few seconds to figure out where things have moved to. Is it possible to highlight a search term without moving the cursor to the next match?

    Read the article

  • Change Google search location beyond current country

    - by Abel
    I often work remotely and prefer using en-us locale settings for my searches. One of our computers is located in Germany, and whenever I try to change the location in Google Search Settings, I get the answer "Please enter a valid Deutschland city or zip code". I checked this post, but it doesn't apply, as my language setting is en-us and the Google search language is also set to English. It clearly uses the IP address of the computer. I couldn't find an answer in the Google help pages; does anyone have any idea how to change it to London, New York or Amsterdam?

    Read the article
