Search Results

Search found 14789 results on 592 pages for 'pro backup'.

Page 243/592 | < Previous Page | 239 240 241 242 243 244 245 246 247 248 249 250  | Next Page >

  • How to hide a download file from bots? [closed]

    - by CJ7
    Possible Duplicate: How to restrict the download of all files in a folder? I want to make a private file available for download but not use username/password protection. I want to put the file into a directory called something like download. How can I ensure: the file does not become part of search engine results, and the file cannot be accessed by bots that might guess the directory name?
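
    One approach, sketched under the assumption of an Apache host: keep compliant crawlers out with robots.txt, and in the download directory's .htaccess disable directory listings and send a noindex header so a guessed URL still stays out of the index. Note that robots.txt alone will not stop a bot that ignores it, so the header is the stronger half of the pair.

        # robots.txt (site root) -- keeps well-behaved crawlers out
        User-agent: *
        Disallow: /download/

        # /download/.htaccess -- assumes mod_headers is available
        Options -Indexes
        <IfModule mod_headers.c>
            Header set X-Robots-Tag "noindex, nofollow"
        </IfModule>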

    Read the article

  • SEO blog Indexing: wordpress.com subdomain vs a registered domain?

    - by rumspringa00
    I've used WordPress for a few of my clients' sites, mostly small businesses and eCommerce sites. I have found through Google Analytics, as well as the All in One Webmaster plugin, that when it comes to social media, using WordPress is a surefire way of getting your site indexed by Google and occasionally Bing and Yahoo. Since I am a heavy WP user, I'd like to contribute by registering a wordpress.com subdomain for my portfolio. When using a WP installation concurrently with a WP subdomain, e.g. myportfolio.wordpress.com, will the site be more or less likely to be indexed than a generic myportfolio.com domain? I've seen mixed opinions: some people seem to favor a WP subdomain for URL output, while others say it's a moot point and that Google will not favor a WP subdomain over a regular dot-com domain as long as your meta tags are updated and your content is keyword-optimized. I tend to disagree, and believe a WP subdomain would be more likely to be indexed and surface more URLs than a standalone domain like myportfolio.com. Am I wrong?

    Read the article

  • Does purposely linking to an invalid URL and then using 301 affect SEO?

    - by Mike
    On a section of my site, I am currently using .htaccess rewrites to put the ID as part of the URL instead of in the query string, like so: RewriteRule ^([a-z_]+)?/?tours/([0-9]+)/(.*) /tours/tour_text.php?lang=$1&id=$2&urlstr=$3 [L] For example, if someone goes to /en/tours/12/some-text-here it will rewrite it to /tours/tour_text.php?lang=en&id=12&urlstr=some-text-here. However, I don't want users to be able to put in just any text, so if they type the wrong some-text-here part, it will 301 redirect them to the right page. This works perfectly, but I can see a potential problem arising when localizing the website, so I just wanted to make sure it's not actually a problem. As it is now, if someone goes to /en/tours/12/some-text-here, the anchor to the Spanish version of that page will be /es/tours/12/some-text-here (i.e. only changing the "en" to "es"), and the script will then 301 them to the correct Spanish text (something like /es/tours/12/algun-texto-aqui). The reverse works the same way: the anchor on the Spanish version to the English version would be /en/tours/12/algun-texto-aqui, and they would be forwarded with a 301 back to /en/tours/12/some-text-here. Basically, the anchor changes the language and the 301 fixes the string at the end. So I have two questions: Does purposely and permanently having invalid URLs on your site that get 301'ed to the correct ones have any effect on SEO? I could make it show the correct URL to begin with, but this is a significant amount of work due to how I am handling the translations, so I would prefer just to 301 them. And will the invalid URLs contained in the links be added to the search engine indexes even though they get 301'ed to another page?
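
    For what it's worth, a minimal sketch of the slug check inside tour_text.php; the get_tour_slug() lookup is a hypothetical stand-in for however the canonical text is fetched, not the asker's actual code:

        <?php
        // Compare the requested slug against the canonical one for this tour/language
        $canonical = get_tour_slug((int)$_GET['id'], $_GET['lang']); // hypothetical lookup
        if ($_GET['urlstr'] !== $canonical) {
            // Send the browser (and crawlers) to the canonical URL with a 301
            header('Location: /' . $_GET['lang'] . '/tours/' . (int)$_GET['id'] . '/' . $canonical, true, 301);
            exit;
        }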

    Read the article

  • Anonymous vs. logged-in users on my site & Google Analytics

    - by Flowpoke
    I'd like to be able to run two different 'tracks' for Google Analytics: one for anonymous users of the site and another for users who are logged in. I say "track" because I'm not sure of the term, but I definitely know I want it all to be in the same Analytics account; I just want to segregate my logged-in users. In the site template, I can very easily add a conditional to display one or the other Analytics code snippet, which I'm hoping is what this comes down to. Although I'm not sure, it seems that the last digit in your Analytics ID (e.g. UA-15XXXX0-X) could be incremented to gain such additional 'tracks'...? Any tips? Am I doing it wrong? My current footer snippet: <script type="text/javascript"> var gaJsHost = (("https:" == document.location.protocol) ? "https://ssl." : "http://www."); document.write(unescape("%3Cscript src='" + gaJsHost + "google-analytics.com/ga.js' type='text/javascript'%3E%3C/script%3E")); </script> <script type="text/javascript"> try { var pageTracker = _gat._getTracker("UA-XXXXXXX-1"); pageTracker._trackPageview(); } catch(err) {} </script>
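
    Rather than incrementing the property ID, legacy ga.js offers visitor-scoped custom variables for exactly this kind of segmentation; a sketch, assuming the template sets a logged-in flag server-side (the slot index and labels are arbitrary):

        <script type="text/javascript">
        var isLoggedIn = false; // template flips this to true for logged-in users
        try {
            var pageTracker = _gat._getTracker("UA-XXXXXXX-1");
            // Slot 1, visitor scope (the final argument, 1)
            pageTracker._setCustomVar(1, "UserType", isLoggedIn ? "Member" : "Anonymous", 1);
            pageTracker._trackPageview();
        } catch(err) {}
        </script>

    Custom variables can then be used in advanced segments, which keeps both "tracks" inside one profile.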

    Read the article

  • Fake links cause crawl errors in Google Webmaster Tools

    - by Itai
    Google reported Crawl Errors last week on my largest site through Webmaster Tools. Here is the message: Google detected a significant increase in the number of URLs that return a 404 (Page Not Found) error. Investigating these errors and fixing them where appropriate ensures that Google can successfully crawl your site's pages. The Crawl Errors list is now full of hundreds of fake links, causing 16,519 errors so far; my site does not even have a search.html and is not related to any of the terms shown in the reported URLs. Inspecting the sources for one of those links shows this is not simply an isolated source but a concerted effort: each of the links has a few to a dozen sources, all from different, seemingly unrelated sites. It is completely baffling why someone would spend effort doing this. What are they hoping to achieve? Is this an attack? Most importantly: does this have a negative effect on my site? Could it negatively impact my ranking? If so, what can I do about it? The few linking pages I looked at are full of thousands of links to tons of sites, have no contact information, and do not seem like the kind of people who would simply stop if asked nicely! According to Google Webmaster Tools, these errors appeared within a span of 11 days; no crawl errors were reported previously.

    Read the article

  • How to discredit hacked links pointing at my company's website

    - by Dan Gayle
    The competition of one of my company's websites has started a really dirty campaign of acquiring hacked links. One of their ingenious tactics has been to seed links to OUR site within their hack bot, making US look like we might be responsible for it, or using us to cover their tail. These are .gov and .edu sites. Is there any way to discredit these links? To disavow them at all? EDIT: Penguin has really affected this question, IMO. Does anyone know if there is a revised opinion on disavowing backlinks to your site?
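
    If the disavow tool is available, it takes a plain-text file uploaded through Webmaster Tools, one URL or domain per line; a sketch with hypothetical entries:

        # sources of hack links pointing at our site (examples only)
        domain:spammy-example.com
        http://another-example.org/page-with-hack-links.html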

    Read the article

  • Broken links in content reports when tracking subdomains with Google Analytics

    - by Rob Sobers
    I have a tracking code that I use on my main site and my blog, which is on a subdomain: www.example.com and blog.example.com. I have a single profile in Google Analytics and use advanced segments to look at traffic to the main site vs. traffic to the blog. Problem 1: when I'm browsing my content reports under Standard Reporting, the "Page" column doesn't show the top-level domain or subdomain, so I can't easily differentiate www.example.com/index.html from blog.example.com/index.html. According to the docs, this filter is supposed to make GA prepend the hostname to the page URL in your content reports, but it doesn't seem to work. Problem 2: when I click on the little "Open in new window" icon next to a given page in a content report line, it always assumes the page lives on www.example.com, so I get 404s when the page is actually on blog.example.com. Is there a good solution for these subdomain tracking problems?
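
    For problem 1, the usual fix is a profile-level advanced filter that writes the hostname into the request URI; a sketch of the settings as described in Google's cross-subdomain documentation (note that filters only affect data collected after they are applied, which may be why the report still shows bare paths):

        Filter Type: Custom filter > Advanced
        Field A -> Extract A:  Hostname     (.*)
        Field B -> Extract B:  Request URI  (.*)
        Output To -> Constructor: Request URI  $A1$B1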

    Read the article

  • Diagnosing Bootstrap 3 Glyphicon Button Icons Not Showing

    - by Paulb
    I have glyphicon buttons in Bootstrap 3. They work very nicely in the latest Chrome, Firefox, Safari, Internet Explorer, and Android. At one facility, the glyphicons don't show; the buttons come up blank. How do I troubleshoot? They are security-sensitive there: I don't have systems or network access, and am not in a position to request that, so troubleshooting with advanced tools isn't going to happen. Here's what I have access to: Internet Explorer 9, behind a very secure firewall. Sometimes I think the glyphs not showing is down to IE 9, but my code should be addressing that. Sometimes I think their firewall is blocking the CDN. Can I enter a URL into a browser to test if the CDN is there? Sometimes I think my FB share and like buttons upset this facility's firewall, and they tie the whole thing down. Any suggestions for how I begin to research this? Or maybe you have an outright idea for IE 9 and glyphs (though my code is very, very close to the demos, which work).
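
    To test whether the CDN is reachable from behind the firewall, pasting the stylesheet and font URLs straight into IE9 at the facility should be enough; assuming the stock MaxCDN paths for Bootstrap 3.3.7 (adjust the version to match the markup):

        https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css
        https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/fonts/glyphicons-halflings-regular.woff

    If the stylesheet loads but the font file does not, or triggers a security prompt, the firewall is the likely culprit, and self-hosting the CSS and font files is the usual workaround.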

    Read the article

  • WUXGA revisited

    - by John Paul Cook
    I previously blogged about my search for a 17” 1920x1200 laptop. The only one I could find was a 17” MacBook Pro, which has been an excellent machine for running Windows and SQL Server. It is no longer made. Apple has a few refurbished ones available. Just be sure to get a matte display if you buy one. If you want WUXGA resolution or better in a laptop, your only off the shelf option is now the 15” MacBook Pro with the Retina display, which is 2880x1800. This exceeds the resolution of my 30” 2560x1600...(read more)

    Read the article

  • Redirects in .htaccess to avoid crawl errors

    - by user71698
    I am getting a lot of errors in Webmaster Tools; basically there are a lot of links ending like this: mydomainname.com/links.php How can I redirect these links to shave off this part at the end? For instance, there is a link in Google: http://www.onlineglobalbiz.com/article-marketing/www.onlineglobalbiz.com/links.php This should be: http://www.onlineglobalbiz.com/article-marketing/ Using .htaccess, how can I redirect the incorrect links?
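
    A minimal .htaccess sketch, assuming the bad URLs always consist of a valid directory with www.onlineglobalbiz.com/links.php appended:

        RewriteEngine On
        # Strip a trailing "www.onlineglobalbiz.com/links.php" and 301 to the parent path
        RewriteRule ^(.+/)www\.onlineglobalbiz\.com/links\.php$ /$1 [R=301,L]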

    Read the article

  • Google search results are invalid

    - by Rufus
    I'm writing a program that lets a user perform a Google search. When the result comes back, all of the links in the search results are links not to other sites but to Google, and if the user clicks on one, the page is fetched not from the other site but from Google. Can anyone explain how to fix this problem? My Google URL consists of this: http://google.com/search?q=gargle But this is what I get back when the user clicks on the Wikipedia search result, which was http://www.google.com/url?q=http://en.wikipedia.org/wiki/Gargling&sa=U&ei=_4vkT5y555Wh6gGBeOzECg&ved=0CBMQejAe&usg=AFQjeNHd1eRV8Xef3LGeH6AvGxt-AF-Yjw <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html lang="en" dir="ltr" class="client-nojs" xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Gargling - Wikipedia, the free encyclopedia</title> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /> <meta http-equiv="Content-Style-Type" content="text/css" /> <meta name="generator" content="MediaWiki 1.20wmf5" /> <meta http-equiv="last-modified" content="Fri, 09 Mar 2012 12:34:19 +0000" /> <meta name="last-modified-timestamp" content="1331296459" /> <meta name="last-modified-range" content="0" /> <link rel="alternate" type="application/x-wiki" title="Edit this page" > <link rel="edit" title="Edit this page" > <link rel="apple-touch-icon" > <link rel="shortcut icon" > <link rel="search" type="application/opensearchdescription+xml" > <link rel="EditURI" type="application/rsd+xml" > <link rel="copyright" > <link rel="alternate" type="application/atom+xml" title="Wikipedia Atom feed" > <link rel="stylesheet" href="//bits.wikimedia.org/en.wikipedia.org/load.php?debug=false&amp;lang=en&amp;modules=ext.gadget.teahouse%7Cext.wikihiero%7Cmediawiki.legacy.commonPrint%2Cshared%7Cskins.vector&amp;only=styles&amp;skin=vector&amp;*" type="text/css" media="all" /> <style type="text/css" media="all">#mwe-lastmodified { display: none; }</style><meta name="ResourceLoaderDynamicStyles" content="" /> <link rel="stylesheet" href="//bits.wikimedia.org/en.wikipedia.org/load.php?debug=false&amp;lang=en&amp;modules=site&amp;only=styles&amp;skin=vector&amp;*" type="text/css" media="all" /> <style type="text/css" media="all">a:lang(ar),a:lang(ckb),a:lang(fa),a:lang(kk-arab),a:lang(mzn),a:lang(ps),a:lang(ur){text-decoration:none} /* cache key: enwiki:resourceloader:filter:minify-css:7:d5a1bf6cbd05fc6cc2705e47f52062dc */</style>
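
    Those /url?q=... links are Google's own click-tracking redirects; the real destination travels in the q query parameter. A minimal PHP sketch of unwrapping it (the function name is mine):

        <?php
        // Return the destination hidden inside a google.com/url?q=... redirect link
        function unwrap_google_url($href) {
            $parts = parse_url($href);
            if (isset($parts['host'], $parts['query'])
                    && substr($parts['host'], -10) === 'google.com') {
                parse_str($parts['query'], $query);
                if (isset($query['q'])) {
                    return $query['q']; // the actual target URL
                }
            }
            return $href; // not a redirect link; leave it unchanged
        }
        echo unwrap_google_url('http://www.google.com/url?q=http://en.wikipedia.org/wiki/Gargling&sa=U');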

    Read the article

  • How to configure Google Analytics experiments manually

    - by John
    I wish to run multivariate tests on an e-commerce site; the tests run across all product pages. I will be setting and deciding the variations myself; all I need to do is track the results in GA. I think this may be possible (although only A/B testing is available via the GA UI): https://developers.google.com/analytics/devguides/platform/features/experiments#serving-framework EXTERNAL – You will choose variations, handle experiment optimization, and only report the chosen variation to Google Analytics. For example, this should be used by 3rd-party optimization platforms that want to integrate with Google Analytics for reporting purposes. In this case, the Google Analytics statistical engine will not run. However, how do I configure this and push the data to GA from my page?
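
    If I read the Content Experiments JavaScript API correctly, reporting an externally chosen variation looks roughly like the sketch below; the experiment ID and variation index are placeholders, and cxApi is loaded from Google's hosted script:

        <script src="//www.google-analytics.com/cx/api.js?experiment=YOUR_EXPERIMENT_ID"></script>
        <script>
          // Your own code picks the variation; this only reports the choice to GA
          cxApi.setChosenVariation(1);
        </script>

    The regular tracking snippet on the page then sends the pageview (and any goals) tagged with that variation.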

    Read the article

  • Change the subdomain and keep the rest of the URL

    - by MohamedKadri
    Hello, I'm working on a multilingual website, and I want to generate the links in this way: the site is domain.tld and defaults to English; it has some subdomains like fr.domain.tld, which will be in French, de.domain.tld, which will be in German, it.domain.tld, which will be in Italian, and so on. When the current page is the index, the links to the other languages will be like this: domain.tld, fr.domain.tld, de.domain.tld, it.domain.tld... But when we are on another page like domain.tld/my-page, how do we generate the URLs to match the current page but with another subdomain/language, using PHP?
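
    A minimal PHP sketch; the helper name is mine, and it assumes English lives on the bare domain while REQUEST_URI carries the current path:

        <?php
        function language_url($lang, $host = 'domain.tld') {
            $prefix = ($lang === 'en') ? '' : $lang . '.';
            return 'http://' . $prefix . $host . $_SERVER['REQUEST_URI'];
        }
        // On domain.tld/my-page: language_url('fr') => "http://fr.domain.tld/my-page"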

    Read the article

  • jQuery Carousel - Need a Script Hack to Customize

    - by Leah
    I hope someone can help me out here. I am coding a site for a client using the carouFredSel plugin. I am using this because it allows for variable widths within the slideshow, as seen here: http://2938.sandbox.i3dthemes.net/index-old.html. My problems are as follows: with the built-in auto-center script, the scrolling changes the white space in between images during the transition to fit the width of the wrapper. I need a hack to keep the white space during the transition the same, like this: http://2938.sandbox.i3dthemes.net/index.html. Also, I can't figure out how to put this snippet into my code and make it work: scroll: { onAfter: function() { if ( $(this).triggerHandler( "currentPosition" ) == 0 ) { $(this).trigger( "pause" ); } } }
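
    On the second problem, that snippet is a configuration option, so it belongs inside the options object passed to the plugin; a sketch, assuming a container with the id "carousel" (the selector and the auto option are placeholders):

        $("#carousel").carouFredSel({
            auto: true,
            scroll: {
                onAfter: function() {
                    // Pause once the carousel has scrolled back to the first item
                    if ( $(this).triggerHandler( "currentPosition" ) == 0 ) {
                        $(this).trigger( "pause" );
                    }
                }
            }
        });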

    Read the article

  • Using AdSense to show ads to logged-in users

    - by John
    I know that you can grant authorization permissions to Google AdSense so that it can 'log in' and see what other logged-in users can see (e.g. in a private forum), so that the ads it displays are better targeted. Extending this principle further: I am making a site which will show completely different content for each individual user (i.e. not 'common' content like a forum in which everybody sees essentially the same thing). You could think of this content as similar to the way each Facebook user has a different news feed, but it is the 'same' page. Complicating things further, the URLs for this site will be simple, e.g. '/home' and '/somepage', and will not usually include unique identifiers to differentiate between users (e.g. '/home?user=32i42'). My questions are: Is creating an account purely for AdSense to log in to the site worth it in this case, seeing as it will be seeing its own 'personalized' version and not any other user's? More importantly: is that against the Google AdSense Terms of Service? (I can't seem to figure that one out.) How would you go about this problem?

    Read the article

  • Redirecting non-existent pages to the main page? [closed]

    - by Asaf
    Possible Duplicate: SEO: ecommerce item deleted by user, 301 redirect to HOME PAGE or 404 not found? Hello, I have an online shop, and I have some products that I want to delete. However, I am aware that some of them are indexed and/or bookmarked by some people. Now I was wondering what would be the best practice SEO-wise: to do a 301 redirect to the main page if anyone tries to access those pages, or to return a 404 and display something like "Page not Found"? Perhaps something completely different?
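
    Whichever signal is chosen, it should be sent as a real HTTP status code; a PHP sketch of both options for a deleted product page (the product_was_deleted() check is hypothetical):

        <?php
        if (product_was_deleted($id)) { // hypothetical lookup
            // Option A: permanent redirect to the home page (or a relevant category)
            header('Location: /', true, 301);
            exit;
            // Option B, instead of the above: tell crawlers the page is gone for good
            // header('HTTP/1.1 410 Gone');
        }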

    Read the article

  • AdWords: Is there a drawback to setting a really high CPC to learn what works faster?

    - by Rob Sobers
    I'm toying with setting my max CPC really high on all my keywords to ensure my ad gets shown in the top spot on page one, in order to draw more clicks. I think this will be a good way to quickly figure out whether the ads I'm writing have a decent CTR and, more importantly, whether the landing pages I'm building are converting. Since I can set a max daily budget for my campaign, I won't risk breaking the bank. I can't think of any drawbacks, personally. Am I missing any?

    Read the article

  • What is the correct step-by-step logic for exporting a scene with baked occlusion so it can be loaded at runtime?

    - by myWallJSON
    I wonder what the correct step-by-step logic is for exporting a scene with baked occlusion culling data so that the scene can be loaded at runtime (on the fly from the internet, for example). Currently my plan looks like this: I create prefabs; I place them onto my scene (into the Hierarchy), say 20 buffaloes, some horses, and some buildings; I create an empty prefab and drag all my scene objects from the Hierarchy onto it; I export the prefab. So generally I put all my scene objects into one large prefab and export it, but it seems that all objects that were marked as static get this property turned off when loaded at runtime, and so no frustum culling or occlusion culling happens. So I wonder what the correct way is to export a scene plus objects plus occlusion (and other culling) data for a future load of such a scene at runtime? I am asking about the current 3.5.2 Pro and future 4 Pro versions of U3D.

    Read the article

  • Google still has a record of my old site URL - what to do?

    - by Mayeenul Islam
    I had a blog site, i.e. http://example2.com; then I bought a new domain, i.e. http://example.com, and 301 (permanent) redirected example2.com to example.com. But in Google Webmaster Tools, when I get a 404 and click into the link to see the "Linked from" tab, it shows some links like: http://example.com/post-1 http://example2.com/feed http://example2.com/post-1 According to Google, if you change your domain, you just need to keep the redirection in place for at least 4-6 months, and that period has almost passed. So why does Google still have traces of my old site? The issue is important because I don't want to pay for the old domain anymore. I tried deleting my existing sitemap.xml and recreating it from the new site, but such links are still stored. What could I do?

    Read the article

  • Where to find my website's source code?

    - by Aamir Berni
    My company ordered a website, and we were given all the usernames and passwords, but I can't find the PHP source files, and this is my first website assignment. I have no prior exposure to web technologies, although I've been programming for a decade and know computer usage inside out. I tried to use cPanel to find .php files, but there aren't any. There are no MySQL databases either. I'm lost. I'll appreciate any help in this regard.

    Read the article

  • How does Google Web Starter Kit serve adaptive images for mobile?

    - by 5argon
    My website weirdly (in a good way) serves smaller images when viewed on mobile, and I wanted to know what causes this. As far as I know, this is not the default behaviour, so I think it must be Google Web Starter Kit's doing. Here is the debug information when debugging on a device: all images became 231 B in size, no matter how large they actually are. (When debugging on the desktop, the size varies.) I tried using Google Web Starter Kit (https://github.com/google/web-starter-kit) recently. The tools in it are built on Ruby, Node.js, SASS and Gulp to help you 'build' a website. Pre-build, you can enjoy automatic reloading because the Gulp script watches all files for you. At build time, it runs various tools to minify HTML and CSS and compress images. According to this page https://developers.google.com/web/fundamentals/tools/build/build_site the gulp-imagemin plugin was used. So I guess imagemin is doing the mobile optimization for me? What kind of compression can serve automatically resized images on mobile? And why is the size 231 B? Is this related to my screen size?

    Read the article

  • How to set up Google DFP (DoubleClick for Publishers) for a site?

    - by Manoj Kumar
    I have a website, and I have an AdSense account as well. I have integrated AdSense, and ads are getting displayed (480 x 60). Somewhere I read that I can manage the ads that are being shown on my website (480 x 60) and filter the ads on a CPM/CPC basis. NOTE: I don't have any ads to be displayed on others' websites; I just want to show others' ads on my website. Now, can I use Google DFP to manage the ads? I mean, is Google DFP useful for me to filter the ads and earn more revenue?

    Read the article

  • Subdomain redirects to different server, maintaining original URL

    - by Jason Baker
    I just took over the webmaster position in my organization, and I'm trying to set up a development environment separate from the production environment. The two environments are hosted with different companies, both on shared hosting plans. Right now, production is at domain.com and development is at domain2.com. What I want is to direct my development team to dev.domain.com. More importantly, I don't want the URL to change from dev.domain.com back to domain2.com, but I also want it to be responsive to page changes. For example, if my dev team navigates from dev.domain.com to a page (called "page"), I'd like the URL to show as dev.domain.com/page. Is this possible, or am I just dreaming?
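
    Keeping dev.domain.com in the address bar while serving content from domain2.com requires proxying rather than redirecting; a sketch for the host answering dev.domain.com, assuming mod_rewrite and mod_proxy are enabled (which shared hosts often disallow):

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^dev\.domain\.com$ [NC]
        # [P] proxies the request server-side instead of redirecting the browser
        RewriteRule ^(.*)$ http://domain2.com/$1 [P,L]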

    Read the article

  • How to test robots.txt with Googlebot to find out what is being indexed

    - by Amar Jarubula
    This question is a continuation of this answer: How to check if googlebot will index a given url? As suggested there, I went to Webmaster Tools and tested the contents of my robots.txt file. However, that only tells me whether the file's contents are valid; for my scenario I need to test whether URLs matching a disallowed pattern end up indexed or not. For example, I have something like this in my robots.txt: disallow:/pattern* My understanding is that URLs containing the word pattern should not be crawled, but how do I test that this pattern is enforced when the site is indexed?
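
    For reference, the directive would normally be written as below; the trailing * is redundant, since Disallow matches by URL prefix:

        User-agent: Googlebot
        Disallow: /pattern

    One caveat: robots.txt blocks crawling, not indexing, so a disallowed URL that is linked from elsewhere can still appear in results without a snippet. The robots.txt tester in Webmaster Tools, plus a site: query for the pattern, is about the closest one can get to verifying enforcement.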

    Read the article

  • Guidelines for promoting a web forum or portal

    - by Hafiz
    I am a web application developer and have developed a lot of sites, portals and apps for clients. Now I want to have such sites of my own, with which I can do business. But I don't know the steps to do so. What I know is how to make a site or portal, but what comes after that? There are lots of people getting heavy traffic on sites built with simple open-source software; many of them are forums. I also want to start a forum and a web portal. I can develop them myself or use open source. But what after that? Content entry and SEO? Is that all it takes to promote a portal or site? Does SEO still work nowadays, or is it all about marketing and advertising? I have no idea, so please tell me what you suggest. Thanks in advance for everyone's opinions.

    Read the article
