Search Results

Search found 41930 results on 1678 pages for 'google product search'.

  • Using Solr and Zend's Lucene port together...

    - by thebluefox
    Afternoon chaps. After my adventures with Zend-Lucene-Search, and discovering it isn't all it's cracked up to be when indexing large datasets, I've turned to Solr (thanks to Bill Karwin for that :) ). I've got Solr indexing the db far, far quicker now, taking just over 8 minutes to index a table of just over 1.7 million rows, which I'm very pleased with. However, when I come to try and search the index with the Zend port, I run into the following error:

        Fatal error: Uncaught exception 'Zend_Search_Lucene_Exception' with message 'Unsupported segments file format' in /var/www/Zend/Search/Lucene.php:407
        Stack trace:
        #0 /var/www/Zend/Search/Lucene.php(555): Zend_Search_Lucene->_readSegmentsFile()
        #1 /var/www/z_search.php(12): Zend_Search_Lucene->__construct('tmp/feeds_index')
        #2 {main}
        thrown in /var/www/Zend/Search/Lucene.php on line 407

    I've tried to have a search around but can't seem to find anything about this problem; everyone else just seems to be able to get the two to work together. Any help, as always, much appreciated :) Thanks, Tom
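
    The error points to a format mismatch rather than a broken setup: Solr writes a newer Lucene segment format on disk than the Zend port understands, so Zend_Search_Lucene cannot open a Solr-built index directly. The usual route is to query Solr over HTTP instead of opening its index files. A minimal sketch in PHP, where the URL, port and field names are assumptions for illustration:

        <?php
        // Ask Solr for matches via its HTTP API, using the JSON response writer.
        $q   = urlencode('feed_title:"some phrase"');
        $url = "http://localhost:8983/solr/select?q={$q}&wt=json&rows=10";

        $result = json_decode(file_get_contents($url), true);

        // Walk the standard Solr response structure.
        foreach ($result['response']['docs'] as $doc) {
            echo $doc['id'], "\n";
        }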

  • User Productivity Kit - Powerful Packages (Part 2)

    - by [email protected]
    In my first post on packages I described what a package is and how it can be used. I also started explaining some of the considerations that should be taken into account when determining how to arrange your packages. The first is when the files are interrelated and depend on one another, such as an HTML file and its graphics.

    A second consideration is how the files are used in your outlines. Let's say you're using a dozen Word doc files. You could place them all in a single package or put each Word doc file in a separate package, but what's the right thing to do? There are several factors that will influence your decision. To understand the first, let me explain a function of UPK publishing. Take an outline in UPK that has an attachment (concept, frame link, or hyperlink) that points to a file in a package. When you publish this outline, the publishing engine will determine that there is a link to a file in the package and copy the contents of the package to the publishing destination directory. This is done to ensure that any interrelated files are kept together. For the situation where you have an HTML file with links to a number of graphics files, this is a good thing. If, however, the package has a dozen unrelated Word doc files and you link to only one of them, all dozen Word documents will be copied to the publishing destination directory.

    Whether or not this is a good thing depends on two factors. First, are all of the files in the package used in the outline that you're publishing? Take an outline that includes links to all of the Word documents in that dozen-document package I described earlier. For this situation, you may choose to keep all the files in a single package for convenience.

    A second consideration is how your organization leverages reuse in UPK. In this context, I'm referring to the link style of reuse, such as when you link to the same topic from multiple UPK outlines and changes to the topic appear in both places. Take an example where you have the earlier-mentioned dozen-Word-document package and an outline with a dozen topics in it. Each topic has an attachment pointing to one of the Word documents in the package (frame link, concept, etc.). If you're only publishing this outline, the single package probably works fine, but what if you're reusing one of these topics in another outline? As I explained earlier, linking to one file in the package will result in all files in the package being copied to your published output. In this example, linking to one topic in the first outline will result in all dozen Word documents being copied to the published output. This may result in files in the output that you don't want there for business or size reasons. This is a situation in which you should consider placing each of the Word documents in its own separate package. With each document in its own package, a link to a single document will result in only that single package and single Word document being copied to the published output.

    In my last post I described that packages are documents in the UPK library. When using the multi-user version of the UPK Developer you can leverage standard library capabilities for managing the files in these packages during the development process - capabilities such as check in / check out, history, etc. When structuring your packages, take into consideration how the authors are going to be adding, modifying and deleting files from the packages. A single package is a single document in the UPK library. Like any other document in the library, only one user at a time can check out and edit the package. If you have a large number of files in a single package and these must be modified by many users, consider whether this will cause problems as multiple users compete to update the same package. If the files don't depend on each other, consider placing the files in separate packages to reduce contention.

    I hope you've enjoyed these two posts on how you can leverage the power of packages in your content. In summary, consider the following when structuring your packages:

    Is the asset a single, standalone file or a set of files that depend on each other?

    Will all the files always be used together in a single outline, or may only some of the files be needed based on how the content is reused across multiple outlines?

    Will multiple developers need to update the files in a single package, or should you break it into multiple packages to reduce contention when checking out the document?

    We'd like to hear from you on how you're using packages in your content. Please add your comments below! Thank you, and I hope these two posts have given you additional insights into how to use packages in your content and structure them for efficient use.

    John Zaums
    Senior Director, Product Development
    Oracle User Productivity Kit

  • Are keywords in URLs good SEO, or needlessly redundant?

    - by Blazemonger
    A coworker and I are locked in a debate over the value of SEO keywords in the URL of a page. She wants to change all the filenames of the HTML pages of a fencing company so they look like residential-home-chicago.html, contact-chicago-contractor.html, and so on. She is convinced that because Google highlights keywords in URLs in search results, putting keywords there is more valuable. My position is that these do not improve SEO, since Google doesn't seem to give keywords in the URL any more weight than keywords in the body of the page, and might even give them less weight. In the meantime, they make it harder for me to find the pages I want when it's time to edit them, and the site as a whole looks cheap and spammy. Google's own SEO guide suggests to me that yes, keywords in URLs are useful, but not superior, and that they are more useful for human readability than for search engine rankings. I'm looking for authoritative sources that support either position, not blog articles from SEO companies trying to promote themselves.

  • Traffic drop after a server problem

    - by Sébastien
    I have a website whose traffic I analyse with Google Analytics. Day after day the traffic (mainly from Google search) increased, until I had a problem with my server. The server was offline for one day, and since then I have not had as many users as before. Now it's as if the site is no longer referenced in Google's index (though when I type "site:mysite.com", I still get all the results). Do you know if this is normal behaviour, and whether the traffic will come back to what it was (the server had its problems two days ago)?

  • How do bingbot (is that the right spider name?) and googlebot interpret a 301 redirect?

    - by jbcurtin
    I've been looking for documentation on how the Microsoft and Google bots interpret 301 redirects. It seems that googlebot stores documents in a URL-based index system, but I haven't been able to figure out how Bing works. Should I assume that they are still working towards copying everyone else, and assume they use an algorithm close to Google's? Is it best to just forward a page to a new location via JavaScript? I think this might be a blackhat trick, but how would I tell the bots that it's not? Is a 301 redirect my best option, and do I just have to bite the bullet because said pages no longer exist? What other options do I have that I might not be aware of?
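
    For reference, a 301 is an HTTP-level response sent before any page content, which is why both crawlers can follow it without executing scripts; a JavaScript "redirect" is not an HTTP redirect at all, and a bot may simply ignore it. A minimal sketch in PHP, with a placeholder target URL:

        <?php
        // Permanent redirect: crawlers transfer indexing to the new URL.
        header('HTTP/1.1 301 Moved Permanently');
        header('Location: http://example.com/new-location/');
        exit;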

  • Stop bots from crawling old links with extensions

    - by Jared
    I've recently switched to MVC3, which uses extensionless URLs, but Google and Bing have a wealth of links that they are crawling which no longer exist. So I'm trying to find out if there is a way to format robots.txt (or use some other method) to tell Google/Bing that any link that ends in an extension isn't a valid link. Is this possible? For pages that I'm concerned a user may have saved as a favourite, I display a 404 page that lists the links to take to reach the new page (I decided not to just redirect them, as I don't want to maintain these forever). For Google's/Bing's sake I do have the canonical tag in the header.

        User-agent: *
        Allow: /
        Disallow: /*.*

    EDIT: I just added the 3rd line (in the text above) and it APPEARS to do what I'm wanting: allow a path, but disallow a file. Can anyone confirm this?

  • My domain is well indexed only in my country

    - by ali
    This is my domain: http://yon.ir. It should mainly show up in search results for the keyword "????? ????? ????". Inside Iran it appears in Google's results at rank 13 (for that keyword), which is logical, but from IPs in other countries it doesn't show up even in the first 10 pages of Google results (yet the domain is indexed, and when I search for the domain title itself, my site shows up first). It has behaved this way for about 10 days, and the domain is not new (it was in use by its previous owner before). So, what's the problem here?

  • Best SEO practices for backlinking, etc.

    - by Aaron Lee
    I'm currently working on a website that I am really looking to optimise for search engines. I've been submitting between 5 and 20 directory submissions daily, I've validated and optimised my code, and I've joined a lot of forums etc. to speak of the website in question. However, I don't seem to be making much of an impact in terms of Google. I know that SEO takes a while to start making an impact, and that Google prefers sites that are regularly updated and aged, but are there any more practices that can really help with organic results in search engines? I have looked on Google itself and a few other SEs, but nobody is willing to talk about extensive SEO practices, as they normally don't want people knowing their formulas for SEO. Also, does anyone know of a decent piece of software that really looks into the ins and outs of your page and provides feedback? I usually use http://www.woorank.com, but using only one program doesn't show whether it's exactly correct in what it's saying. If anyone could help it would be much appreciated, thank you very much.

  • Is it possible to know the impressions of other websites?

    - by Saeed Neamati
    The Google Webmaster Tools dashboard gives you a big number called impressions; by the definition I've seen in Google Analytics, it means the total number of times your site has become eligible to appear in SERPs. I just have no idea how to act on this number, or how much its increase or decrease means to me, because I can't compare it with other websites. I mean, if the impressions of, say, a.com are 150,000 and mine are 50,000, then maybe I can infer that I need to triple my efforts to catch up with a.com. But seeing 50,000 alone, I have no clue at all how to interpret it. Is there any service, or some other way, to learn the impressions Google gives to other sites?

  • Online Accounts advanced settings with Empathy (13.10)

    - by uruloke
    The new Online Accounts doesn't have the advanced settings that the Empathy accounts dialog had. How do I change the Google server to connect to? I read this at https://wiki.gnome.org/Empathy/FAQ: "I can't connect to my Google Talk account. Your router is probably blocking DNS SRV requests. If possible you should try to fix it. If you can't, the easiest workaround is to set "talk.google.com" in the "Server" field of the advanced section of the account." So I think this might fix my problem, or maybe just an option to change the port it connects to would. Also, does anyone know how to join IRC channels with Empathy? I have installed the plugin, but I don't know how to join a channel.

  • Migrating PR / rankings from one site to another

    - by sam
    I've got a client's company site with decent PR, backlinks and search engine rankings. The client wants to change their company name, and therefore their URL. I will set up a redirect between the old site and the new site, but I was wondering: is there a way to tell Google that they are moving while retaining all their rankings? It is the same people, services, office building, same everything, just rebranded under a different name and URL. Additionally, if there is a way to do this, how does Google stop you buying expired domains and just pointing them at your site? For instance, I could buy several PR3 domains all relating to the same sector and point them at my site, or would Google catch on to this?
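
    If the redirect is done per-URL rather than everything-to-the-homepage, each old page hands its signals to its direct replacement; Google Webmaster Tools also has a "change of address" feature for exactly this rebranding case. A sketch of a catch-all redirect in PHP on the old domain, with the new domain as a placeholder:

        <?php
        // Send every request to the same path on the rebranded domain.
        header('HTTP/1.1 301 Moved Permanently');
        header('Location: http://www.new-company-name.com' . $_SERVER['REQUEST_URI']);
        exit;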

  • Algorithms to find the longest common prefix in a sliding window

    - by nn
    Hi, I have written a Lempel-Ziv compressor and decompressor. I am seeking to improve the time it takes to search the dictionary for a phrase. I have considered K-M-P and Boyer-Moore, but I think an algorithm that adapts to changes in the dictionary would be faster. I've been reading that binary search trees (AVL, or with splays) improve compression time considerably. What I fail to understand is how to bootstrap the binary search tree and insert/remove data. I'm not actually quite sure of the significance of each node in the binary search tree. I am searching for phrases, so will each character be considered a node? Also, how and what is inserted into or removed from the search tree as new data enters the dictionary and old data is removed? The binary search tree sounds like a good payoff since it can adapt to the dictionary, but I'm just not quite sure how it's used.
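
    A sketch of the usual arrangement (as in LZSS implementations such as Okumura's), assuming a plain unbalanced tree for brevity: each node stores a window position, not a character. The string starting at that position is the node's key, so ordering is ordinary lexicographic comparison of substrings, and inserting the current position discovers the longest existing match on the way down. Positions that slide out of the window are removed with standard BST deletion; an AVL or splay variant keeps the tree from degenerating.

        <?php
        class Node {
            public $pos;            // start offset of this phrase in the buffer
            public $left = null;
            public $right = null;
            public function __construct($pos) { $this->pos = $pos; }
        }

        class MatchTree {
            private $root = null;
            private $data;          // window + lookahead as one string
            private $maxLen;        // longest match worth recording

            public function __construct($data, $maxLen = 258) {
                $this->data   = $data;
                $this->maxLen = $maxLen;
            }

            // Length of the common prefix of the strings starting at $a and $b.
            private function commonPrefix($a, $b) {
                $limit = min($this->maxLen, strlen($this->data) - max($a, $b));
                $k = 0;
                while ($k < $limit && $this->data[$a + $k] === $this->data[$b + $k]) {
                    $k++;
                }
                return $k;
            }

            // Insert the phrase starting at $pos; returns array($bestPos, $bestLen),
            // the longest match among the nodes visited on the way down.
            public function insertAndFind($pos) {
                $bestPos = -1;
                $bestLen = 0;
                if ($this->root === null) {
                    $this->root = new Node($pos);
                    return array($bestPos, $bestLen);
                }
                $cur = $this->root;
                while (true) {
                    $k = $this->commonPrefix($pos, $cur->pos);
                    if ($k > $bestLen) {
                        $bestLen = $k;
                        $bestPos = $cur->pos;
                    }
                    // Branch on the first differing character; treat "no more
                    // data" as smaller than any real character.
                    $ca = ($pos + $k < strlen($this->data)) ? $this->data[$pos + $k] : '';
                    $cb = ($cur->pos + $k < strlen($this->data)) ? $this->data[$cur->pos + $k] : '';
                    if (strcmp($ca, $cb) < 0) {
                        if ($cur->left === null) { $cur->left = new Node($pos); break; }
                        $cur = $cur->left;
                    } else {
                        if ($cur->right === null) { $cur->right = new Node($pos); break; }
                        $cur = $cur->right;
                    }
                }
                return array($bestPos, $bestLen);
            }
        }

        // Usage: $t = new MatchTree($buffer); list($p, $n) = $t->insertAndFind($i);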

  • Redirecting non-existing posts to the homepage; is that good for SEO?

    - by BlackEagle
    I am checking my website on Google Webmaster Tools and I am seeing an astonishing 5000 links that could not be found by Google's crawlers. That's normal, because my website is built in a manner that lets users drop their own things, which also leads to 404 pages. Not a problem at all if I can find a workaround, of course... So my question is: what if I made a function, or a mod_rewrite rule, that checks whether the link (a post, for example) exists and, if it doesn't, redirects to the home page? Is this good for SEO? Will Google see this as 'link found'? How should I look at this problem?
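
    A sketch of the trade-off being weighed, with a hypothetical lookup: for content that is genuinely gone, answering 410 (or 404) tells Google to drop the URL from the index, while a blanket redirect to the homepage is commonly treated as a "soft 404" rather than as a found link.

        <?php
        // Hypothetical lookup; a real one would query the posts table.
        function findPost($id) {
            $posts = array('1' => 'hello-world');   // stand-in data
            return isset($posts[$id]) ? $posts[$id] : null;
        }

        $id = isset($_GET['id']) ? $_GET['id'] : null;
        if (findPost($id) === null) {
            header('HTTP/1.1 410 Gone');   // or '404 Not Found'
            include 'not-found.php';       // hypothetical page linking back home
            exit;
        }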

  • How to check that I have recovered from Penguin 2.0?

    - by Simon Walker
    I have a 3-year-old website which was hit by Penguin 2.0 in May. The website's traffic dropped almost 30%. I have been working hard on the website for the last 2.5 months, and my website's traffic recovered in the last week of August. In fact, I am receiving more traffic than ever. When I look at the stats, I find my website's search visibility has improved: it now appears for more search queries, and my website's impressions have also increased. What I am worried about is that my website is nowhere in the top 5 pages for keywords with high competition and the highest search volume. They are few in number, but important. Should I consider my current situation a recovery, or only a partial recovery? And if it is only partial, how come traffic is higher than it was before Penguin 2.0?

  • SEO blog indexing: a wordpress.com subdomain vs. a registered domain?

    - by rumspringa00
    I've used WordPress for a few of my clients' sites, mostly small businesses and eCommerce sites. I have found through Google Analytics, as well as the All in One Webmaster plugin, that when it comes to social media, using WordPress is a surefire way of getting your site indexed by Google and, occasionally, Bing and Yahoo. Since I am a heavy WP user, I'd like to contribute by registering a wordpress.com domain for my portfolio. When using a WP installation on a WP domain, e.g. myportfolio.wordpress.com, will the site be more or less likely to be indexed than a generic myportfolio.com domain? I've seen mixed opinions: some people seem to favor a WP domain for URL output, while others say it's a moot point and that Google will not favor a WP domain over a dot-com domain as long as your meta tags are updated and your content is keyword-optimized. I tend to disagree, and believe a WP domain would be more likely to be indexed, and to output more URLs, than an individual, laconic domain like myportfolio.com. Am I wrong?

  • Webmaster Tools showing 404 for non-existent folder pages

    - by Jody
    Google Webmaster Tools is reporting some/many 404 URLs that don't exist on my site, links such as domain.com/xyz/. That URL doesn't exist, but domain.com/xyz/index.html does. The "linked from" pages all show proper links to "/xyz/index.html". The page without index.html DOES 404, but why is Google even trying these URLs if they are not linked to? My real question: is there a way to make Google stop attempting to load these pages, and ultimately remove them from the crawl errors report? Thanks.
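
    If the host is Apache (an assumption, not something stated in the question), one fix is to let the bare directory URL serve the index file, so /xyz/ stops answering 404 in the first place:

        # .htaccess sketch: serve index.html for bare directory requests
        DirectoryIndex index.html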

  • Recommendations for a network of student-related content

    - by Javier Marín
    I am running a network of websites with notes, homework, essays, etc., where users share their own content. I'm having real trouble with the latest Google updates (Penguin, Panda, etc.) because the content is mainly low-quality and all on the same topic. For that reason, I want to create more websites and have a better chance of appearing in the SERPs. My question is: does Google analyze related websites in order to exclude them from the results? I've thought about distributing the websites around the world, on different hosts, but I'm afraid Google would link them through their Analytics, Webmaster Tools or AdSense accounts. Is that possible? What other recommendations do you have?

  • Empathy: auto-accepting group chat invites

    - by Sivaji
    I'm using Empathy 2.34.0 as the chat client for an account hosted on Google Apps (server talk.google.com). I'm happy with the features that Empathy provides and its integration with Google chat; however, for group chat, when an invite is received I need to click the "Join" button shown in a popup to get started. This makes sense, but I would like to know whether there is any way to join the chat room automatically, without clicking the "Join" button, as I use it only with trusted users. Besides, the messages shared after the invite request and before my entry into the chat room are not accessible to me. I looked around the Empathy settings but couldn't find anything useful; I'm wondering if I can get some help here. Thanks.

  • What is the best way to deal with 404s from an external site that all try to point to the same page?

    - by Lee
    I started getting 404s in my Google Webmaster Tools from a site linking to a specific category, but with odd characters at the end of the URL. Something like this: http://example.com/category/puppies%EF%BC%9A.textwidget%E8%A6%81%E7%B4%A0%E7%B7%A8%E9%9B%86 Google Webmaster Tools says that there are about 120 of these links, and I can imagine there will be more to come. What is the best way to handle these links from an SEO point of view? I have heard that 301-redirecting too many links at one time can cause Google to ding the site, but I don't want this site to continue posting broken links. Any help on this would be appreciated.

  • Will a rel=canonical link pointing to a 301 redirect pass less PageRank than one without a 301?

    - by tobek
    On this official Google page about canonical links, it says: "Can rel="canonical" be a redirect? Yes, you can specify a URL that redirects as a canonical URL. Google will then process the redirect as usual and try to index it." There is no mention that this might dilute the impact of the canonical link. However, Google has made clear elsewhere that 301 redirects do dilute PageRank, roughly as much as a link dilutes PageRank. Is that relevant here? I'm assuming the answer is "no", but I wanted to confirm. Relevant but not a duplicate: Does Rel=Canonical Pass PR from Links or Just Fix Dup Content.
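
    For reference, the element under discussion, with a placeholder URL; pointing it straight at the redirect's final destination sidesteps the dilution question entirely:

        <link rel="canonical" href="http://example.com/final-destination/">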

  • Do WordPress websites get indexed more quickly by search engines than regular websites?

    - by guisasso
    I registered a couple of domains with the names of categories of products we sell. I then installed WordPress on one of those domains, played around with it for a bit, and left it alone for about a month. There was a link on my regular website to that secondary website, and the secondary website was also registered in Google's Webmaster Tools, but that's it. I then searched on Google last week for that product category and, to my surprise, the secondary website showed up on the 2nd or 3rd page of results. Now my question is: do search engines index WordPress websites more quickly? I had given up on using WordPress for that website, since it's so simple, but if I were to use it, would it give me better results? Thanks in advance for the help, if the question is not deleted.

  • What is the right approach to using AdSense with responsive web design?

    - by Sisir
    Recently I have been studying responsive design a lot and have designed a couple of sites with it. But I was wondering how I would use Google AdSense ads (which are pixel-based) in my responsive designs. A very typical example: suppose I have a 728x90 ad in the header. If I take a mobile-first approach, I would need different ad sizes for different viewports, but Google doesn't allow more than three ad units per page (as far as I know). So, the question: what is the right approach/best practice for using Google AdSense in a responsive site design?
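
    One data point worth noting: Google has since introduced responsive ad units, where a single unit sizes itself to its container, so separate pixel sizes per breakpoint (and extra ad units) are not needed. A sketch of that snippet, with placeholder client and slot IDs:

        <script async src="//pagead2.googlesyndication.com/pagead/js/adsbygoogle.js"></script>
        <ins class="adsbygoogle"
             style="display:block"
             data-ad-client="ca-pub-0000000000000000"
             data-ad-slot="0000000000"
             data-ad-format="auto"></ins>
        <script>
        (adsbygoogle = window.adsbygoogle || []).push({});
        </script>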

  • Moving a blog's content to a new website [duplicate]

    - by arash
    This question already has an answer here: "How do I rename a domain and preserve PageRank?" (3 answers). I have a blog and I want to move its contents to a new website that I made recently. My question is this: my blog's PageRank in Google is 2, and I want to know whether, if I move my blog's contents to the new website and then delete the blog, my new website will get the blog's PageRank in Google after a few months, considering the blog earned its PageRank of 2 with these same contents. Thanks a bunch ;)

  • Duplicate pages

    - by Mert
    I made a small coding mistake and Google indexed my site wrongly. This is the correct form: https://www.foo.com/urunler/171/TENGA-CUP-DOUBLE-HOLE but Google indexed my site like this: https://www.foo.com/urunler/171/cart.aspx First I fixed the problem and made a sitemap containing only the correct links. Now I've checked Webmaster Tools and I see this:

        Total indexed: 513
        Not selected: 544
        Blocked by robots: 0

    So I think this is caused by the duplicate indexing, and the duplicates look like what is making my pages "not selected". I want to know how to fix the "https://www.foo.com/urunler/171/cart.aspx" links. Should I fix it in code, or should I ask Google to reindex my site? And if I should redirect the wrong/duplicate links to the correct ones, how should that be done? Thanks for your time in advance.

  • The Google Apps business Marketplace turns one; its growth over the first 12 months was 600%

    The Google Apps Marketplace is one year old, and its growth over the first 12 months is 600%. A year ago Google launched its second application gallery. The first was aimed at Android (the Android Market); the second at its Google Apps. The Google Apps Marketplace now offers more than 300 business applications. The purpose of this gallery and of these applications is to round out Google's initial offering for businesses. Google Apps is built on Google Docs (online word processing, spreadsheets, etc.), on Gmail, on a shared calendar, and on a set of constantly evolving services (blog editor, ...)
