Search Results

Search found 19563 results on 783 pages for 'binary search'.

Page 88 of 783

  • Language redirect affecting pagerank and search listing?

    - by Janoszen
    Preface: We have a number of sites that use the same redirect mechanism across the board. We recently transitioned one site from non-localised to localised and found that the Google+ integration no longer shows up in the search results AND the PageRank has gone from 2 to 0.

    How the redirect works (a sketch of this order in code follows below):

    1. If the UA sends a cookie (e.g. lang=en), redirect the user to /language (e.g. /en).
    2. If the UA is a bot (.*bot.*), redirect to /en.
    3. If the Accept-Language header contains a usable, non-English language, redirect to /language (English is the default on many browsers in non-English regions).
    4. If there is a valid GeoIP lookup and the detected region is linked to a supported language, redirect to /language.
    5. Otherwise, redirect to /en.

    On all pages we do of course have the proper markup to indicate the alternate language: <link hreflang="de" href="/de" rel="alternate" />. As far as we can tell, we follow all publicly available guidelines from Google, so we are at a bit of a loss as to whether this is a bug in Google or something we have done wrong.

    Question: Does not having content on the root URL of a domain adversely affect search engine rankings, and if so, how does one implement a proper language redirection?
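
    For reference, a minimal sketch of the decision order described above, as a plain function. This is illustrative only: the parameter names, the supported-language set and the bot check are assumptions, not the site's actual code.

        // Sketch only: mirrors the five redirect rules listed above.
        // All inputs are assumed to be pre-parsed by the caller.
        using System;
        using System.Collections.Generic;

        static class LanguageRedirect
        {
            static readonly HashSet<string> Supported = new HashSet<string> { "en", "de" };

            // cookieLang:  value of the "lang" cookie, or null
            // userAgent:   raw User-Agent header
            // acceptLangs: Accept-Language codes in preference order
            // geoIpLang:   language linked to the GeoIP region, or null
            public static string Resolve(string cookieLang, string userAgent,
                                         IEnumerable<string> acceptLangs, string geoIpLang)
            {
                if (cookieLang != null && Supported.Contains(cookieLang))
                    return "/" + cookieLang;                          // rule 1: cookie wins
                if (userAgent != null && userAgent.ToLowerInvariant().Contains("bot"))
                    return "/en";                                     // rule 2: bots get English
                foreach (var lang in acceptLangs ?? Array.Empty<string>())
                    if (lang != "en" && Supported.Contains(lang))
                        return "/" + lang;                            // rule 3: Accept-Language
                if (geoIpLang != null && Supported.Contains(geoIpLang))
                    return "/" + geoIpLang;                           // rule 4: GeoIP
                return "/en";                                         // rule 5: default
            }
        }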

    Read the article

  • Azure Search Preview

    - by Greg Low
    One of the things I’ve been keeping an eye on for quite a while now is the development of the Azure Search system. While it’s not a full replacement for the full-text indexing service in SQL Server on-premises as yet, it’s a really, really good start. Liam Cavanagh, Pablo Castro and the team have done a great job bringing this to the preview stage and I suspect it could be quite popular. I was very impressed by how they incorporated quite a bit of feedback I gave them early on, and I’m sure that others involved would have felt the same. There are two tiers at present. One is a free tier and has shared resources; the other is currently $125/month and has reserved resources. I would like to see another tier between these two, much the same way that Azure websites work. If you have any feedback on this, now would be a good time to make it known. In the meantime, given there is a free tier, there’s no excuse to not get out and try it. You’ll find details of it here: http://azure.microsoft.com/en-us/documentation/services/search/ I’ll be posting more info about this service, and showing examples of it during the upcoming months.

    Read the article

  • Data indexing frameworks fit for large E-Commerce applications

    - by Dabu
    We wrote and still maintain a large E-Commerce application. Our feature list resembles what you would expect from most shops. We'd like to improve some of our features, and the search/suggestion-list functionality (enter some letters, a JavaScript suggestion list appears) has caught our eye. Currently we use http://xapian.org/. It has some drawbacks. Firstly, it's not really the right tool: it was created to index documents, not ever-changing data at the granularity an E-Commerce application needs. Secondly, the load on the database is significant when we reindex all the data every night. We'd like a framework that has been designed for indexing database data, that can add to the index easily and without much load, and that can push data changes from the back office to the frontend quickly, without much load or delay. I'm aware that Xapian is Open Source and even Free Software, so we could adapt it to our needs if we decided to invest the time and manpower. But taking a quick look around for a more suitable solution seems fair, right? Oh, and commercial applications are fine too; FOSS is not required. Thanks a bunch.

    Read the article

  • Ubuntu 12.04 crash analysis - strange binary data on all open files at the moment of crash

    - by lanbo
    A couple of hours ago we got a system crash on Ubuntu 12.04. We checked all the log files and there is nothing suspicious to blame. The last thing logged was some Dovecot activity; there are no kernel panic messages - nothing. It is a new server (new hardware) we are testing before production, and because the hardware is new, I suspect the problem may be due to some faulty component. We already ran memtester with no problems detected; I'd be happy to hear about other hardware-testing tools (the machine has an SSD). Anyway, the thing I wanted to ask is a different one. The strange thing is that in every file that was open at the moment of the crash, we found the following sequence of symbols written: "@^@^@^@^@^@^@...". For example, in the syslog log file we got:

        Apr 16 15:53:56 odyssey dovecot: pop3-login: Aborted login (auth failed, 1 attempts): user=<info>, method=PLAIN, rip=46.29.255.73, lip=5.9.58.177 ^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^ [these continue for about 1000 chars...] ^@^@^@^@Apr 16 15:55:12 odyssey kernel: imklog 5.8.6, log source = /proc/kmsg started.

    We got these symbols in all open files, including syslog, mail.log, kern.log, ... but also in some logs written by PHP scripts run from cron jobs under user accounts (not root). So, any idea why all open files got these characters written into them during the crash? This is pretty bad, since the crash corrupted many files (we don't even know which other ones may be affected). We suspect that all files open (in write mode, maybe) at the moment of the crash got these symbols inserted. Why is that? BTW, in case it helps: the system automatically rebooted after the crash but Apache did not start. There were no traces in /var/apache2/*log of why Apache did not start. After running "service apache2 start" it started with no problems. Also, we rebooted the machine manually and Apache also started on reboot. But it did not start after the crash and no errors were reported. Thanks guys!

    Read the article

  • Tackling thin content on an images gallery

    - by Ted Wilmont
    We run an image gallery as part of our site; however, we have over 8,000 images and every image has a separate HTML page of its own to display the image caption, a related image, and comments from users of the site. This seems to be a problem, especially with the Google Panda update, because these pages are technically "thin content". What would be the best way to tackle this? We'd love some feedback and advice regarding this scenario. We have a few options we thought of already but can't decide:

    1. We could noindex the separate image pages and lose any image search listings we have for those images, in favour of removing these thin pages from the index.
    2. We could 301 all of the individual image pages back to the image category listing, anchor each image (e.g. #img2122), and include all of the comments and descriptions on the category listing page itself.
    3. If we were to simply list all of the images and content on the category pages themselves, what's the best method? We could add all of the content in the anchor tags and use jQuery to display it in a box when a user clicks on the image, or we could use Ajax to retrieve the information. However, what's the best Ajax method for SEO?

    Any ideas, suggestions, tips or advice are greatly appreciated - thank you in advance.

    Read the article

  • Why does Google Chrome ignore "last_known_google_url" property in "Local State" file?

    - by Peter Sivák
    I want to force my Google Chrome web browser (version 21.0.1180.89, 64-bit) to use non-localized search (thus Google in English) through the address bar, using the default Google search engine. To achieve that, I have to change the value of the property last_known_google_url to https://www.google.com/?hl=en& in the Local State file (for instance on Linux, the full path to the file is ~/.config/google-chrome/Local State). In that file, there should be the property: "browser": { "last_known_google_url": but it is not there. Even if I add the property, it has no impact on search - Google Chrome does not use it and still searches in the localized version. Another option is to put the property in the Preferences file (for instance on Linux, the full path to the file is ~/.config/google-chrome/Default/Preferences), which works perfectly when I start Google Chrome and do a search - but just after that, the property (actually the whole Preferences file) is overwritten, so the "most important" trailing part ?hl=en& of the property value is removed, and without it the non-localized search no longer works. Why does Google Chrome ignore the last_known_google_url property in the Local State file?

    Read the article

  • What are the popular file indexing engines on Linux?

    - by netvope
    It would be nice if you could share your experience of the pros and cons of each of them. Personally I only know of Google Desktop and Beagle, and I haven't really used them. I mainly store my files on Windows (and use its integrated indexed search), but I'm trying to see if I can switch over to Linux. Also, can any of these search indexers run without X? Do any of them provide an API for search queries?

    Read the article

  • Multilingual website without language component in the URL

    - by user359650
    I'm working on a website for Canada which will have French and English versions. For SEO purposes, I would like to avoid using any language tag in the URLs because I believe it will have more impact (e.g. example.ca/products is better than en.example.ca/products or example.ca/en/products). I believe this is technically possible because the two languages are sufficiently different that the URLs won't conflict with one another (e.g. if you want a "product" page, it will be /products in English and /produits in French, so you know which language the URL is about). Since Google (and most likely others) doesn't rely on the URL (nor on HTML tags) to determine the content language, I don't see any problem with search engines. To make this possible I've thought about using a cookie distinct from the session cookie (e.g. example.org_language) with a long-term expiry (e.g. N years) that will memorize the language chosen by the user; a sketch of this follows below. That way, when people visit the website with a new browser session, they get served the proper language. I have already given up on letting users switch a single page from English to French: when people choose English or French from the menu they will be redirected to the corresponding version of the home page. Do you foresee any problems with not using a language component in the URL (whether domain or path), as long as one makes sure the URLs don't conflict?
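
    A minimal sketch of the language choice described above. The cookie name, the supported languages and the default are assumptions, and q-values in Accept-Language are ignored beyond the listed order.

        // Sketch: honour a long-lived preference cookie if present, otherwise
        // take the first supported primary tag from Accept-Language.
        // The caller would store the result back in a cookie with a multi-year expiry.
        using System;
        using System.Linq;

        static class LanguageChoice
        {
            public static string Resolve(string cookieValue, string acceptLanguageHeader)
            {
                if (cookieValue == "en" || cookieValue == "fr")
                    return cookieValue;                   // returning visitor: stored choice wins

                // Accept-Language looks like "fr-CA,fr;q=0.9,en;q=0.8":
                // strip q-values, reduce to primary tags, take the first supported one.
                var candidates = (acceptLanguageHeader ?? "")
                    .Split(',')
                    .Select(part => part.Split(';')[0].Trim().ToLowerInvariant())
                    .Select(tag => tag.Split('-')[0]);
                foreach (var tag in candidates)
                    if (tag == "en" || tag == "fr")
                        return tag;

                return "en";                              // assumed default
            }
        }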

    Read the article

  • SEO with duplicate content

    - by user16831
    I have a nature photography site with multiple types of photo galleries. Each photo and its associated caption appears in several galleries. For instance, a photo of a goldfinch taken on a trip to New Mexico in 2008 will appear in the "goldfinch.php" gallery, in the "finches.php" gallery, and in the "New_Mexico_2008.php" gallery. This duplication is useful for my site visitors - User A may want to see goldfinch photos, whereas User B wants to see photos from New Mexico - but I am concerned about the SEO implications. The typical suggestions for dealing with duplicate content, such as 301 redirects and canonical tags, probably won't work in this case, because the page content is substantially different (ranging from ~1% to ~90% duplication, depending on the specific example chosen). The obvious solution to me would be to edit robots.txt to only allow search engines to crawl one type of gallery - for instance, if they crawled only the galleries organized by species (e.g. goldfinch.php), all the photos on my site would be found exactly once. However, the Google content guidelines recommend against blocking crawler access to duplicate information. Should I go ahead and use robots.txt anyway? Or is there a better solution?

    Read the article

  • Should I, and how do I incorporate microdata into my asp.net website with 47 pages?

    - by Jason Weber
    I have an asp.net (VB) website with 47 pages. The problem is that it's in 10 different languages, although 98% of visitors just use English. I have 5 master pages. I've read Google Webmaster Tools, but I'm still confounded. I'm reading that microdata is the way to go. Does this mean I should put itemtype and itemprop span and div tags in my master pages, or should I do all 47 pages (.resx resource files) separately? The main key phrase I want throughout search results is "machine vision". For instance, the first couple of sentences on my "about.aspx" page are:

        <span itemprop="name">USS Vision Inc.</span> (USS) is a privately-owned company with headquarters in <span itemprop="locality">Detroit, Michigan, USA</span>. We design, engineer, produce, and integrate special machine vision error-proofing products and <a href="http://www.ussvision.com/services/" target="_self" itemprop="url">services</a> that create lean factories by improving the quality of manufactured products, and by significantly reducing manufacturing costs through advanced automation.

    Am I doing this right, and if not, how should I do it? Should I use itemprop="url" or other rich snippets for every link in my website? I mean, do I need to add an itemprop to just about everything, or can I just alter my master pages? Any guidance in this regard to help improve my SEO and SERPs would be greatly appreciated!

    Read the article

  • PHP-based indexing and search implementation

    - by Chris
    Is there such a thing? A while back I designed a rudimentary form-based app for my users. We receive hardware manufacturing data from our suppliers in XML files: the file name is made up of eleven fields separated by tildes, with each field having its own meaning. The R&D guys wanted to be able to search each field of the file names, so I used regular expressions with decent results. The problem is that we now have upwards of 2.5 million files, and my app can't hack it anymore (a sketch of the structure being searched follows below). I looked at Apache Lucene & Solr. Though it seemed like the best solution to my problem, the fields in the filenames are not peers to the file content - a big no-no with Solr. What is the best way to implement a PHP app with indexing and search capability over such a large number of files? Do I have to buy Zend and use Zend_Search? Is it the only way? Thanks for your input.
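
    For clarity, the data being searched is roughly this shape: eleven tilde-separated fields per file name. A minimal sketch follows (in C# for consistency with the other code on this page; the field positions are hypothetical), and the linear scan it performs is exactly the approach that stops scaling at 2.5 million files.

        // Sketch: treat each file name as a record of eleven tilde-separated fields
        // and filter on any one of them. Field meanings are defined by the supplier
        // and are not reproduced here.
        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class FileNameIndex
        {
            // Split "a~b~c~..." into its fields once, at index time.
            public static string[] Parse(string fileName) =>
                System.IO.Path.GetFileNameWithoutExtension(fileName).Split('~');

            // Return the file names whose field at position fieldIndex matches value.
            public static IEnumerable<string> Search(IEnumerable<string> fileNames,
                                                     int fieldIndex, string value) =>
                fileNames.Where(f =>
                {
                    var fields = Parse(f);
                    return fieldIndex < fields.Length &&
                           fields[fieldIndex].Equals(value, StringComparison.OrdinalIgnoreCase);
                });
        }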

    Read the article

  • Blogger homepage won't update!

    - by Sims Siniron
    I am new to blogging and Webmaster Tools. Usually when I add a new post to my blog, my homepage is automatically updated as well. But since the 14th of January, my homepage hasn't been updated in the Google SERPs. As a result I am losing my popularity in the SERPs. Previously, when I posted a new article, 70-80% of them would reach the first page of results; since the problem occurred, none of them reach even the top 15 pages of the Google SERPs :( On 1/12/12, Google Webmaster Tools sent me a "Notice of DMCA removal from Google Search" message indicating one of my URLs contained some infringing content, which I deleted after receiving their notice. Not only that, I also checked all of my posts for any additional infringing content. After removing that, I filled out Google's Content Removed Notification form to notify them, and Google sent me feedback that they had received it, suggesting "In the future, if you have removed the allegedly infringing content from your site (and won’t put it back), please use the correct form", which I also filled out. Now my question is: is everything alright that I did before? Although my new posts are indexed in the Google SERPs, why won't Google update my homepage, which previously updated automatically whenever a new article was published?

    Read the article

  • Sharepoint .PDF contents displaying as 'searchtext.xml' in searches

    - by Green Muffins
    Hi experts, I recently installed an iFilter in my SharePoint farm to enable searching of the contents of .pdf documents. All went well, except that if I search for the contents of any .pdf file, they appear in the search results with the document title "searchtext.xml", and the link to the document gives a giant page of the .pdf contents in an .xml-looking browser page. :s I have added the .pdf file type to the search, so I am unsure why it is reading them incorrectly. If I search for a .pdf document title such as 'document.pdf' it will display the result as an html page, though the link does lead to a readable .pdf file. Any help?

    Read the article

  • Lotus Notes: Searching email by fields

    - by themel
    I'm using Lotus Notes 8.5.2 in a large corporate deployment. I'm trying to figure out how to search my email in a structured manner, e.g. by specifying criteria on fields. The help seems to suggest that I can use field names in square brackets and a list of operators; e.g. to find all mail where the From field contains John, I'd search for /[From] CONTAINS John. However, I can't get this to work - any operator-style query I've tried returns zero documents. "Web-style" queries (e.g. typing John into the search dialog) work, but I'd really prefer a way that would let me search more precisely. Potential issues: (1) I'm assuming that the field names can be taken from the list of things I see when I open a mail and look at its Document Properties; (2) full-text indexing is turned off for my mailbox, and all my attempts to create my own index have failed. Does anyone have better information on searching by from/date/subject conditions in Notes?

    Read the article

  • Want to know how a particular page can be searched by Google?

    - by Champ
    I want to know what keywords a particular page can be found by on Google. Is there any tool on the web by which I can get the keywords for the page I want to find? E.g. if we search for "test" on Google we will find this. Now, what do I have to search for (which keywords) to find a particular page, let's say abc.com/test.php? Is there any tool by which I can get those keywords? Sorry if I am not clear with the question.

    Read the article

  • Is having a single `IndexWriter` instance in Lucene a good idea?

    - by Dragos
    I am trying to understand how Lucene should be used. From what I have read, creating an IndexReader is costly, so using a SearcherManager should be the right choice. However, a SearcherManager should be produced by an NRTManager (which, by the way, should replace the IndexWriter for every add or delete operation performed). But in order to have an NRTManager, I should first have an IndexWriter, and here comes my problem. The documentation says:

    - an IndexWriter is thread-safe;
    - the constructor of this class takes a Directory object, so it seems creating an instance should be costly (as in the case of an IndexReader);
    - all changes are buffered and flushed periodically (so they seem to encourage using a single instance);

    but:

    - the changes, although flushed, will only be visible after commit or close;
    - after finishing making updates (add/delete), the instance should be closed.

    I also found this: http://stackoverflow.com/questions/5374419/forgot-to-close-the-lucene-indexwriter-after-adding-documents-to-the-index where it is said that not closing a writer might ruin everything. So what am I really supposed to do? Is having a single IndexWriter instance a good idea (make only commits and never close it)? EDIT: What is more, if I use NRTManager, how can I make a commit? Is it even possible?
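
    For reference, a sketch of the "single long-lived writer" pattern being discussed, written against Lucene.NET 4.8 for consistency with the C# elsewhere on this page (the Java API the question refers to is analogous; the path, field name and version-specific type names are assumptions).

        // Sketch: one IndexWriter for the lifetime of the process; commit to
        // publish changes, dispose only at shutdown.
        using Lucene.Net.Analysis.Standard;
        using Lucene.Net.Documents;
        using Lucene.Net.Index;
        using Lucene.Net.Store;
        using Lucene.Net.Util;

        class SingleWriterExample
        {
            static void Main()
            {
                var dir = FSDirectory.Open("/tmp/example-index");
                var config = new IndexWriterConfig(LuceneVersion.LUCENE_48,
                                 new StandardAnalyzer(LuceneVersion.LUCENE_48));

                // The writer is thread-safe, so every add/update/delete in the
                // application goes through this single instance.
                using (var writer = new IndexWriter(dir, config))
                {
                    var doc = new Document();
                    doc.Add(new TextField("title", "hello lucene", Field.Store.YES));
                    writer.AddDocument(doc);

                    // Commit periodically so readers can see the changes; the
                    // writer itself is only disposed at shutdown, which the end
                    // of this using block represents here.
                    writer.Commit();
                }
            }
        }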

    Read the article

  • Permission denied when trying to execute a binary burned to a CD-R

    - by user16654
    On an Ubuntu 9.10 (Karmic Koala) machine, I burned a CD from the command prompt using: cdrecord -v speed=16 dev=0,1,0 /FPS.iso The CD now contains an executable and some files. I tested the CD by loading it onto another machine (Red Hat 5.3) and when I try to run the program I get the following message: bash: ./FPS1_1: Permission denied I can open other files like text documents (the executable also comes with shared libraries). I realized I had burned the CD as root so I burned another one as another user but I still have the same problem. How can I remove this permission or what is the problem? P.S. the image was in / if that helps

    Read the article

  • Sharepoint managed Properties

    - by paulie
    Originally posted on StackOverflow, and edited for clarity. I have a custom Content Type inside a list that has over 30 items (which were uploaded via DockIt), and I have added several "managed properties" to the "crawled properties" in the SSP. All of them work except one. The column "Synopsis" is a multiline field with no limit on its length. It appears as a crawled property "Synopsis" and is mapped to a managed property 'asynop'. On the 'Advanced Search' page it is added as a property and is searchable; however, it only returns some matching records (if any). I manually created an entry, ran the crawl and was able to search for it. I edited an existing entry, ran the crawl (full and incremental), and it still only returned the manually entered entry. If I enter the search term in the search box directly ("asynop:fatigue"), then all the correct results appear. Why is this happening? And could it please stop?

    Read the article

  • Email Discovery from Fairly Large Mailbox (15gig) Exchange 2003.

    - by nysingh
    I have a request from our legal team to search a user's mailbox. The mailbox is 15 GB and it is on Exchange 2003. I am trying to run Windows Desktop Search and Google Desktop. I have gotten them to index the mailbox, but getting the results into a folder to back up on CD is proving a bit difficult: neither Windows Desktop Search nor Google Desktop Search allows you to copy results to another folder. Can anyone point me in the right direction? What is the best way to index and copy the results from a PST, mailbox or EDB file? What are the best discovery methods? Thanks

    Read the article

  • Simplifying data search using .NET

    - by Peter
    An example on the asp.net site uses LINQ to create a search feature for a music album site using MVC. The code looks like this:

        public ActionResult Index(string movieGenre, string searchString)
        {
            var GenreLst = new List<string>();
            var GenreQry = from d in db.Movies
                           orderby d.Genre
                           select d.Genre;
            GenreLst.AddRange(GenreQry.Distinct());
            ViewBag.movieGenre = new SelectList(GenreLst);

            var movies = from m in db.Movies
                         select m;

            if (!String.IsNullOrEmpty(searchString))
            {
                movies = movies.Where(s => s.Title.Contains(searchString));
            }

            if (!string.IsNullOrEmpty(movieGenre))
            {
                movies = movies.Where(x => x.Genre == movieGenre);
            }

            return View(movies);
        }

    I have seen similar examples in other tutorials, and I have tried them in a real-world business app that I develop and maintain. In practice this pattern doesn't seem to scale well, because as the search criteria expand I keep adding more and more conditions, which looks and feels unpleasant and repetitive. How can I refactor this pattern? One idea I have is to create a column in every table that is "searchable"; it could be a computed column that concatenates all the data from the different columns (SQL Server 2008). So instead of having movie genre and title it would be something like:

        if (!String.IsNullOrEmpty(searchString))
        {
            movies = movies.Where(s => s.SearchColumn.Contains(searchString));
        }

    What are the performance/design/architecture implications of doing this? I have also tried using stored procedures with dynamic queries, but then I have just moved the ugliness to the database. E.g.:

        CREATE PROCEDURE [dbo].[search_music]
            @title as varchar(50),
            @genre as varchar(50)
        AS
            -- set the variables to null if they are empty
            IF @title = '' SET @title = null
            IF @genre = '' SET @genre = null

            SELECT m.*
            FROM view_Music as m
            WHERE (title = @title OR @title IS NULL)
              AND (genre LIKE '%' + @genre + '%' OR @genre IS NULL)
            ORDER BY Id desc
            OPTION (RECOMPILE)

    Any suggestions? Tips?
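
    One possible way to reduce the repetition in the conditional-filter pattern above is a small extension method, so each optional criterion stays on one line. This is only a sketch, not the tutorial's code; the WhereIf name is invented for illustration.

        // Sketch: apply a Where clause only when the condition holds,
        // so optional criteria compose without a growing chain of if-blocks.
        using System;
        using System.Linq;
        using System.Linq.Expressions;

        public static class QueryableExtensions
        {
            public static IQueryable<T> WhereIf<T>(this IQueryable<T> source,
                bool condition, Expression<Func<T, bool>> predicate)
            {
                return condition ? source.Where(predicate) : source;
            }
        }

        // Usage, following the field names in the example above:
        // var movies = db.Movies.AsQueryable()
        //     .WhereIf(!string.IsNullOrEmpty(searchString), m => m.Title.Contains(searchString))
        //     .WhereIf(!string.IsNullOrEmpty(movieGenre), m => m.Genre == movieGenre);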

    Read the article

  • Somehow Google considers a properly 301'd URL as 200 and is still indexing the new content on the old page?

    - by user2178914
    We redirected all the old URLs to new ones properly using htaccess. The problem is that Google somehow is still finding content at the old page (which it shouldn't) and stores it in its cache under the old URL rather than the new one. For example:

    Old page: http://www.natures-energies.com/iching.htm
    New page: http://www.natures-energies.com/index.php?option=com_content&view=article&id=760

    If you type the old URL into the browser, it redirects. If you fetch the old URL as Googlebot in Webmaster Tools, the header says 301/permanently redirected. If I try to crawl as any other bot, it still says 301 redirected. Even if you click the old link in Google, it redirects to the new URL. Only in its cache does it show the old URL, and moreover it shows the new content in it! I am stumped as to how Google manages to grab the new content and put it under the old URL instead of the new one! One more interesting thing is that if I try a cache lookup for the new page, it shows the cache of the new content with the old URL! Any help would be appreciated; I am at the end of my wits. I think I have tried almost everything. Is there anything that I'm missing? You can use this search to find the old URLs - maybe you'll spot some patterns that I missed: site:www.natures-energies.com inurl:htm -inurl:https|index

    Read the article

  • Transferring users and search engines to a new domain

    - by eftpotrm
    I've been asked to take over the maintenance of an existing site that's being reworked. At present it serves localised content for several languages, but via a fairly unhelpful mechanism that means essentially search engines only have it indexed in English, and any deep links will de facto appear in English as well. So new localised sites are being built under separate domains - not just for this; there are other benefits. What we're then looking to do is redirect users correctly to the new site, where appropriate. For humans this isn't a problem: we can send them through a gateway page on their first site visit, grab their language preference and put it in a cookie, then redirect them to the new localised content as soon as it's available. For search engines, this isn't so good... In principle I'm happy to simply bypass the gateway page and redirect known spiders to the new site, but this means we're serving radically different content (a different URL, even!) to human and robot users. Won't this therefore be regarded as cloaking and cause us grief? Anyone know a better way to handle this?

    Read the article

  • Binary serialization/de-serialization in C++ and C#

    - by 6pack kid
    Hello. I am working on a distributed application which has two components. One is written in standard C++ (not managed C++) and the other is written in C#. Both communicate via a message bus. I have a situation in which I need to pass objects from the C++ application to the C# application, and for this I need to serialize those objects in C++ and de-serialize them in C# (something like marshalling/un-marshalling in .NET). I need to perform this serialization in binary, not XML, for performance reasons. I have used Boost.Serialization to do this when both ends were implemented in C++, but now that I have a .NET application on one end, Boost.Serialization is not a viable solution. I am looking for a solution that allows me to perform (de)serialization across the C++/.NET boundary, i.e. cross-platform binary serialization. I know I can implement the (de)serialization code in a C++ DLL and use P/Invoke in the .NET application, but I want to keep that as a last resort. Also, I want to know whether using some standard like gzip would be efficient. Are there any other alternatives to gzip? What are their pros/cons? Thanks
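
    For illustration only (not the poster's chosen solution): whatever is used, both sides have to agree on an explicit byte layout. Below is a minimal C# sketch of one end of such a hand-rolled wire format; the record shape and field order are invented, and a C++ reader would have to mirror them exactly, including endianness and the string length prefix. Schema-driven formats such as Protocol Buffers exist precisely to avoid maintaining this by hand.

        // Sketch: write and read a fixed, explicitly ordered binary record.
        // BinaryWriter/BinaryReader use little-endian byte order and a
        // length-prefixed UTF-8 string; a C++ counterpart must match this.
        using System;
        using System.IO;
        using System.Text;

        class WireFormatSketch
        {
            static byte[] Serialize(int id, double price, string name)
            {
                using (var ms = new MemoryStream())
                using (var w = new BinaryWriter(ms, Encoding.UTF8))
                {
                    w.Write(id);      // 4 bytes, little-endian
                    w.Write(price);   // 8 bytes, IEEE 754
                    w.Write(name);    // 7-bit-encoded length prefix + UTF-8 bytes
                    return ms.ToArray();
                }
            }

            static void Deserialize(byte[] data)
            {
                using (var ms = new MemoryStream(data))
                using (var r = new BinaryReader(ms, Encoding.UTF8))
                {
                    int id = r.ReadInt32();
                    double price = r.ReadDouble();
                    string name = r.ReadString();
                    Console.WriteLine($"{id} {price} {name}");
                }
            }

            static void Main() => Deserialize(Serialize(42, 9.99, "example"));
        }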

    Read the article
