Search Results

Search found 81412 results on 3257 pages for 'file search'.


  • App not showing up in Google Play search on app name [on hold]

    - by William Jockusch
    About 30 hours ago, I released an app on Google Play. I am concerned that if you search on the exact app name, it does not show up in the results, even if you click "show more". https://play.google.com/store/search?q=free+graphing+calculator&c=apps It does show up if you put the name in quotes. But that's awfully low discoverability. https://play.google.com/store/search?q=%22free%20graphing%20calculator%22&c=apps Possibly relevant information: I had an earlier version with a different bundle ID. It was up for just an hour or so, and probably never actually visible to users. How can I fix this?

    Read the article

  • Developing a search-ranking algorithm

    - by Richart Bremer
    I want to create a basic search engine, and I would like some ideas on how to filter out the best results for my visitors. There are three fields of a product the user can search in: Title, Category and Description. I came up with these ideas and I ask you to either criticize them or add to them. If the search term occurs in all three fields, the product should be among the first results. If it occurs in only two of the fields, it ranks below the results of 1. Combine the number of occurrences and output a percentage. For instance, if the term clock appears 50 times across all fields and all fields together contain 200 words, the percentage is 50/200*100 = 25%. Another product scores, say, 20%, so product one with 25% is listed before product two with 20%.
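
    A minimal Python sketch of the scoring scheme described above, for illustration only; the dict field names and the exact way ideas 1 and 2 are combined (a field-count bonus with the percentage as a tie-breaker) are assumptions:

```python
import re

def score(product, term):
    """Field-count bonus plus percentage of words matching the term."""
    fields = [product["title"], product["category"], product["description"]]
    words = re.findall(r"\w+", " ".join(fields).lower())
    if not words:
        return 0.0
    occurrences = sum(1 for w in words if w == term.lower())
    percent = occurrences / len(words) * 100      # e.g. 50 / 200 * 100 = 25%
    fields_hit = sum(1 for f in fields if term.lower() in f.lower())
    return fields_hit * 100 + percent             # a hit in all three fields always ranks first

def search(products, term):
    return sorted(products, key=lambda p: score(p, term), reverse=True)

products = [
    {"title": "Wall clock", "category": "Clocks", "description": "A large wall clock."},
    {"title": "Alarm clock radio", "category": "Radios", "description": "Radio with display."},
]
print([p["title"] for p in search(products, "clock")])   # ['Wall clock', 'Alarm clock radio']
```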

    Read the article

  • How to Transform a user's search string into an MS SQL Full-Text Search Phrase

    - by Atomiton
    I've searched for answers to this and I can't seem to find an answer to what should be somewhat simple. This is related to another question I asked, but it's different. What's the best way to take a user's search phrase and throw it into a CONTAINSTABLE(table, column, @phrase, topN) query? Say, for example, the user inputs: Books by "Dr. Seuss". What's the best way to turn that into something that will return results from my ContainsTable() query? I was previously parsing the search phrase and writing something like ISABOUT("Books" WEIGHT(1.0), "by" WEIGHT(0.9), "Dr. Seuss" WEIGHT(0.8)) as my @phrase, but ISABOUT seems to be returning odd results... especially when one-word searches are entered. Any ideas?
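
    For illustration, a rough Python sketch of one way to turn such raw input into a weighted ISABOUT expression while keeping quoted phrases intact; the tokenising regex and the descending weights are assumptions, and the resulting string would still be handed to CONTAINSTABLE as the @phrase parameter rather than concatenated into the SQL:

```python
import re

def to_isabout_phrase(user_input: str) -> str:
    """Turn e.g. 'Books by "Dr. Seuss"' into
    ISABOUT("Books" WEIGHT(1.0), "by" WEIGHT(0.9), "Dr. Seuss" WEIGHT(0.8))."""
    # Quoted phrases stay together; everything else is split on whitespace.
    tokens = re.findall(r'"([^"]+)"|(\S+)', user_input)
    terms = [quoted or word for quoted, word in tokens]
    if len(terms) == 1:
        return f'"{terms[0]}"'                      # single-word searches skip ISABOUT entirely
    weighted = [f'"{t}" WEIGHT({max(0.1, 1.0 - i * 0.1):.1f})' for i, t in enumerate(terms)]
    return f'ISABOUT({", ".join(weighted)})'

# Pass the result as the @phrase parameter of CONTAINSTABLE(table, column, @phrase, topN).
print(to_isabout_phrase('Books by "Dr. Seuss"'))
# ISABOUT("Books" WEIGHT(1.0), "by" WEIGHT(0.9), "Dr. Seuss" WEIGHT(0.8))
```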

    Read the article

  • PHP: How to use MySQL fulltext search and handle fulltext search results

    - by garcon1986
    Hello, I have tried to use MySQL fulltext search on my intranet. I wanted to use it to search in multiple tables and get independent results per table on the result page. This is what I did for searching: $query = "SELECT * FROM testtable t1, testtable2 t2, testtable3 t3 WHERE match(t1.firstName, t1.lastName, t1.details) against('".$value."') or match(t2.others, t2.information, t2.details) against('".$value."') or match(t3.other, t3.info, t3.details) against('".$value."')"; $result = mysql_query($query) or die('query error '.mysql_error()); while($row = mysql_fetch_assoc($result)){ echo $row['firstName']; echo $row['lastName']; echo $row['details'].'<br />'; } Do you have any ideas about optimizing the query and formatting the output of the search results?

    Read the article

  • Apache Solr: What is a good strategy for creating a tag/attribute-based search for images?

    - by Development 4.0
    I recently read an article about YayMicro that describes how they used Solr to search their photos. I would like to do something similar (but on a smaller scale). I have figured out how to have Solr search text files, but I would like to learn the best way to associate images with semi-structured/unstructured text. Do I create an XML file with an image link in it? I basically want to input a search string and have it return a grid of images. Yay Micro Article Link
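
    A hedged sketch of the "one Solr document per image" idea, in Python against Solr's standard JSON update and select endpoints (JSON rather than the XML file mentioned above); the core name images and the image_url/tags/description fields are assumptions that would need matching entries in the Solr schema:

```python
import json
import urllib.parse
import urllib.request

SOLR = "http://localhost:8983/solr/images"   # assumed core name and location

def index_image(doc_id, image_url, tags, description):
    """Index one image as a Solr document; only the tags/description text is
    searchable, the image itself is just referenced by URL."""
    doc = {"id": doc_id, "image_url": image_url, "tags": tags, "description": description}
    req = urllib.request.Request(
        SOLR + "/update?commit=true",
        data=json.dumps([doc]).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

def search_images(query, rows=20):
    """Return image URLs matching the query, e.g. to render a grid of thumbnails."""
    params = urllib.parse.urlencode({
        "q": f"tags:({query}) OR description:({query})",
        "fl": "image_url",
        "rows": rows,
        "wt": "json",
    })
    with urllib.request.urlopen(SOLR + "/select?" + params) as resp:
        data = json.loads(resp.read())
    return [doc["image_url"] for doc in data["response"]["docs"]]
```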

    Read the article

  • How to easily use Windows 7 search advanced options?

    - by Scott Evernden
    Is there an alternative to trying to remember all the advanced search options? Like an actual GUI, as we had for Windows XP? As powerful as Windows Search apparently is, I cannot possibly remember all the options available. How is a mere mortal like my Dad supposed to understand and retain all this? I get the shakes every time I need to find something on Win 7. Anyone have some relief? Part 2: Why does it re-run a search if I add a column and try to sort on it?

    Read the article

  • Adding search for a private website

    - by Vitor Py
    I have a login-protected website. It's an internal application and it's not available to the general public, hence it's not indexed by any search engine. My application is developed on Google App Engine. I would like to add a search engine, but obviously without indexing the site publicly. Is there any solution available from Google/Bing/others for a situation like this? Have you done this before? What solution did you choose and what were your results?

    Read the article

  • Storing search results for paging and sorting

    - by Mattias
    I've been implementing MS Search Server 2010 and so far it's really good. I'm doing the search queries via its web service, but due to the inconsistent results, I'm thinking about caching the result set instead. The site is a small intranet (500 employees), so it shouldn't be a problem, but I'm curious what approach you would take if it were a bigger site. I've googled a bit, but haven't really come across anything specific. So, a few questions: What other approaches are there, and why are they better? How much does it cost to store a dataview of 400-500 rows? What sizes are feasible? What other points should be taken into consideration? Any input is welcome :)
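
    A minimal sketch of the caching idea, assuming the full hit list of a few hundred lightweight rows fits in memory and can be keyed by the query string; the in-process dict, the TTL and the row shape are all assumptions (on a real ASP.NET site this would more likely sit in the application cache or a distributed cache):

```python
import time

CACHE = {}                 # query -> (timestamp, list of result rows)
CACHE_TTL = 300            # seconds before a cached result set is refreshed

def run_search(query):
    """Placeholder for the real call to the search web service."""
    return [{"id": i, "title": f"Document {i}", "score": 1.0 / (i + 1)} for i in range(450)]

def get_page(query, page=1, page_size=20, sort_key="score", descending=True):
    """Page and sort against the cached result set instead of re-querying."""
    now = time.time()
    cached = CACHE.get(query)
    if cached is None or now - cached[0] > CACHE_TTL:
        CACHE[query] = (now, run_search(query))      # hit the search service only once per TTL
    rows = sorted(CACHE[query][1], key=lambda r: r[sort_key], reverse=descending)
    start = (page - 1) * page_size
    return rows[start:start + page_size]

print(len(get_page("intranet policy", page=3)))       # 20 rows served from the cached result set
```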

    Read the article

  • Mysterious subdomains to my site indexed by Google

    - by shouren
    Stackers, we have an issue with strange subdomains pointing to (pages on) our site, such as: www2.example.com 2.example.com anothersite.com.example.com A few things are perplexing: who created them? Why would they do that? Why does Google index them and make them appear in the search results when clicking them returns a 5xx error? How can we get rid of them? It seems like some type of scam that hurts our site's organic search results and user experience. Has anyone had a similar experience and know the answers? Really appreciate it!

    Read the article

  • Combining search button on google map and search API

    - by cheesebunz
    Basically, I have the Google search API, which sends back addresses based on the user's search, and a Maps API, which pans the map to the place typed in the textbox. However, they use different textboxes and buttons. I can't seem to change the IDs of the buttons to make ONE button work both ways, so that you get the address and the map moves to the selected location.

    Read the article

  • rails solr search limit total search results / get fixed number of results

    - by kLeos
    I'm trying to perform a search, order the results randomly, and only return a fixed number of results, not all matches; something like limit(2). I've tried using the Solr param 'rows', but that doesn't seem to do anything: @featured_articles = Article.search do with(:is_featured, true) order_by :random adjust_solr_params do |params| params[:rows] = 2 end end @featured_articles.total should be 2, but it returns more than 2. How can I get a randomized, fixed number of results?

    Read the article

  • Is Google Analytics Part Of Google's Search Engine Algorithm?

    - by ub3rst4r
    I was wondering if anyone knows whether Google uses the data it receives from Google Analytics to help determine a website's SERP (Search Engine Rank Position). For example, if my website gets 1000 visitors from Canada and only 100 visitors from the USA, does that mean it will be ranked higher on Google.ca and lower on Google.com? And if a website is using Google Analytics, will it be ranked higher for its organic search keywords?

    Read the article

  • No search data in Google Analytics or Webmasters

    - by cjk
    I have a domain that has been registered in Google Webmaster Tools and using Google Analytics for over 4 months. I get lots of Analytics data, but I am getting no information on Google searches in Webmaster Tools, or under Queries in Search Engine Optimisation in Analytics, even though I am getting keywords for traffic coming to my site from search engines. I have a test sub-domain with the same setup (except not HTTPS) that is getting some of this information through, even with much less data and fewer visits. What could be wrong that stops me getting this information?

    Read the article

  • How do search engines segment against locale?

    - by Hope I Helped
    Assume I run a website with multiple language modes. If I had a Spanish section, it should be included in Spanish-segmented search engines such as Google Spain, Google Peru, Google El Salvador, etc. and excluded in the others. Likewise, even though the website would have content in Chinese, multilingual countries such as Singapore should feature content in their main language (English in this case). What is the best approach to ensure the appropriate language is associated with the various geographically segmented search engines?

    Read the article

  • Kickoff and Krunner to search with less than 3 chars in search field?

    - by Benjamin
    Both Krunner and Kickoff in Kubuntu return a list of found items only after the user has entered at least three characters in the search field. Experience with Synapse shows that returning a list from the first character onwards is faster and more efficient. I would like Kickoff and Krunner to behave similarly, by making them return a list of items after the first character is entered in their search field. How can I achieve that?

    Read the article

  • Include latest searches in search engines index

    - by drcelus
    My websites generally include a page with the latest (user-entered) searches. I know it's not a good security practice to allow this, since you can find undesired content. On the other hand, it boosts the number of pages indexed, since every new search can provide a link on Google and people can find you with related keywords that you are not using on your web page. What is the rationale behind including or excluding these results from a search engine's index?

    Read the article

  • Program to remove exact duplicate files while caching search results

    - by John Thomas
    We need a Windows 7 program to remove/check duplicates, but our situation is somewhat different from the standard one for which there are plenty of programs. We have a fairly large static archive (collection) of photos spread over several disks. Let's call them Disks A..M. We also have some disks (let's call them Disks 1..9) which contain some duplicates of photos found on Disks A..M. We want to add new disks (N, O, P and so on) to our collection, containing the photos from Disks 1..9, but of course we don't want any photo two (or more) times. Theoretically the task can be solved with a regular file duplicate remover, but the time needed would be very long. Ideally, as far as I can see, the real solution would be a program which scans Disks A..M once, stores the file sizes/hashes of the photos in an indexed database/file(s), and then checks the new disks (1..9) against this database. However, I am having a hard time finding such a program (if one exists). Other things to note: we consider that Disks A..M (the collection) don't have any duplicates on them; the file names might have been changed; we aren't interested in the approximate (fuzzy) comparison found in some photo-comparison programs, we hunt for exact duplicate files; we aren't afraid of the command line :-); it needs to work on Win7/XP; we prefer (of course) freeware. TIA for any suggestions, John Th.
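
    In case no ready-made program turns up, a rough Python sketch of the workflow described above: hash the collection disks once into an index file, then check each candidate disk against it. The paths and the index file name are placeholders, and hashing every candidate file is deliberately simple (a file-size pre-filter would speed it up):

```python
import hashlib
import json
import os

def file_hash(path, chunk=1 << 20):
    """SHA-256 of the file contents, read in 1 MB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def build_index(roots, index_file="collection_index.json"):
    """Scan the collection disks (A..M) once and store size keyed by content hash."""
    index = {}
    for root in roots:
        for dirpath, _, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                index[file_hash(path)] = os.path.getsize(path)
    with open(index_file, "w") as f:
        json.dump(index, f)

def find_duplicates(new_root, index_file="collection_index.json"):
    """List files on a new disk (1..9) whose contents already exist in the collection."""
    with open(index_file) as f:
        index = json.load(f)
    for dirpath, _, files in os.walk(new_root):
        for name in files:
            path = os.path.join(dirpath, name)
            if file_hash(path) in index:       # exact content match, independent of file name
                print(path)

# build_index(["A:/", "B:/"]); find_duplicates("N:/")
```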

    Read the article

  • How to code a 'Next in Results' within search results in PHP

    - by thebluefox
    Right, bit of a head scratcher, although I've got a feeling there's an obvious answer and I'm just not seeing the wood for the trees. Basically, I'm using Solr as the search engine for my site, bringing back 15 results per page. When you click on a result, you get a detail page that has a "Next in Results" link on it, which obviously forwards you to the next result. What's the best way of doing this? I've come up with a few solutions, but they're either too impractical or just don't work. I could store all the IDs in a session array, then grab the one after the current one and put that in the link. But with possibly hundreds or thousands of results, the memory that array would need and the performance hit of dealing with it aren't practical. I could take the same approach and put it into the db, but I'd still have to deal with a potentially huge array when I grab them out of the db. Or I could run the search again, returning only the IDs, and grab the one after the one we're currently looking at. I think this could be the best option? Although it does seem kind of messy, mainly when I have to select the ID that's on a different 'page' (i.e. the 16th, 31st etc. result). Unless I pass through where it was in the results and select from there, but that still doesn't seem like the right way to do it. I'm really sorry if this is just complete nonsense; any help is massively appreciated as always. Cheers guys!
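
    A small sketch of the third option (re-running the query for IDs only), in Python for illustration; solr_search here is a hypothetical stand-in for the real Solr call, and the detail page is assumed to know the current result's 0-based position in the full result set:

```python
def solr_search(query, fields, start, rows):
    """Stand-in for the real Solr call; returns ids only, `rows` deep from `start`."""
    all_ids = [f"doc-{n}" for n in range(1, 101)]      # pretend these all match `query`
    return all_ids[start:start + rows]

def next_result_id(query, current_position):
    """Re-run the search asking for ids only and return the id that follows the
    result at `current_position` (0-based index into the full result set).
    Fetching two rows from that offset also covers the page boundary (15th -> 16th)."""
    ids = solr_search(query, fields=["id"], start=current_position, rows=2)
    return ids[1] if len(ids) > 1 else None            # None when already on the last hit

print(next_result_id("wood for the trees", current_position=14))   # -> "doc-16"
```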

    Read the article

  • .Net search engine architecture and technology choice

    - by shrivb
    I am in the process of designing a search engine for an ASP.NET site. The site currently uses Microsoft Indexing Server to index and search content which ranges from simple text files to MS Office documents to PDFs. MIS is also used to crawl file servers, and in tandem with Index Server Companion it crawls content from external sites. I intend to replace MIS with the indexer/crawler I am trying to build. Since my platform is completely on the Microsoft stack, I can't afford to have a Java application server, so Solr, and effectively SolrNet, is ruled out. With this being the context, I have a couple of questions. 1. Technology choice: I did my initial investigation and looked at Lucene.Net. There seem to be two issues with using Lucene.Net. First, it can't crawl external content; there doesn't seem to be a direct port of Nutch to .Net. Second, since it is just an indexer, it can't parse various document types; the parsing is left to the developer. So, what would be the best technology choice on the .Net platform to achieve indexing and crawling? Are there any .Net open source libraries available for document parsing? 2. Architectural pattern: Is there any general architectural pattern or best practice that should be followed in designing such a search engine? Thanks in advance.

    Read the article

  • Search inside Xournal files (.xoj)

    - by Javad Sadeqzadeh
    I'm a big fan of Evernote and I use it regularly. However, it has a 60MB storage limit (text files are not going to occupy much space, but the concern about the limit remains). Today I installed Xournal, which has great features like annotating, nice backgrounds, freehand shapes and notes, saving in PDF format, and many more. But the big problem is that, as far as I've noticed, there is no built-in feature for searching inside the notes (created with Xournal, with the .xoj suffix). I tried the Catfish File Search application (which creates bash commands for full-text search), but it couldn't help either. Is there any way to search inside a .xoj file at all? If so, it could be a suitable alternative to Evernote if you put your .xoj files in the cloud (which certainly offers much more storage space than 60MB). If not, is there any other convenient app similar to Evernote, but with a higher storage limit or without a limit? Somebody suggested the Zim desktop wiki app, which looks great, but I'm not sure if I could copy and paste everything there (a mixture of photos, tables and text with various formats and highlights), like I do with Evernote. A very useful tool I use is Evernote Web Clipper (browser extension). Of course, having a desktop client like Everpad is a plus, but not an absolute need. PS: I use Pocket, so please don't suggest that (it only preserves links, which might change over time, not the actual text). I also use Google Drive/Docs; I don't like those for this purpose either, they're too slow and don't have a browser extension and a desktop client. Thank you so much in advance.

    Read the article

  • How to search for a string everywhere (C: and D:) using Findstr?

    - by amiregelz
    I have a text (.txt) file located somewhere on my PC that contains a bunch of data, including the following string: Secret Username: ********* Secret Password: ********* How can I find this file from command-line, using Findstr? I don't know if it's on C: drive or D: drive. I tried various Findstr queries, such as: findstr /s /m /n /i Secret Username C: findstr /s /m /n /i Secret Username D: findstr /s /m /n /i /c:"Secret Username" findstr /s /m /n /r /i .*Secret Username.* but couldn't find the file.

    Read the article
