Search Results

Search found 5380 results on 216 pages for 'webmasters'.

Page 126/216

  • My First robots.txt

    - by Whitechapel
    I'm creating my first robots.txt and wanted a second opinion on it. I have an FTP area set up on my board for some special users to transfer files between each other, and I do NOT want that included in searches by the bots. I also want to point to my sitemap, which is auto-generated by a PHP page (the Sitemap line points at xmlsitemap.php because that script generates the sitemap when called). My goal is to allow any search bot to crawl the forums to grab metadata. Here is what I have; what else should I include, and do I need to fix anything?

        User-agent: *
        Disallow: /admin/
        Disallow: /ali/
        Disallow: /benny/
        Disallow: /cgi-bin/
        Disallow: /ders/
        Disallow: /empire/
        Disallow: /komodo_117/
        Disallow: /xanxan/
        Disallow: /zeroordie/
        Disallow: /tmp/
        Sitemap: http://www.vivalanation.com/forums/xmlsitemap.php

    Edit: I'm not sure how to handle all the users' folders under /public_html/, since the robots.txt will be going in /public_html.
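    A minimal sketch of one optional refinement, assuming the forums run on Apache with mod_rewrite: an .htaccess rule in /forums/ can expose the PHP-generated sitemap under a conventional .xml URL, so the Sitemap line can point at sitemap.xml while xmlsitemap.php keeps doing the work.

        # .htaccess in /public_html/forums/ (sketch, assumes mod_rewrite is enabled)
        RewriteEngine On
        # serve the auto-generated sitemap under a conventional name
        RewriteRule ^sitemap\.xml$ xmlsitemap.php [L]

    The Sitemap line would then read Sitemap: http://www.vivalanation.com/forums/sitemap.xml.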

  • How can I avoid a 302 for Fetch as Bot?

    - by CookieMonster
    I originally posted this on Stack Overflow, but I believe this is a better place to ask. My web application is very similar to notepad.cc, which redirects to a randomly generated URL upon access, e.g. http://myapp.com/roTr94h4Gd. (Please note that notepad.cc is not my site.) Probably because of this redirect feature, when I do "fetch as Google" or "fetch as Bingbot", I get a 302 and no HTML content, not even an <html></html> tag:

        HTTP/1.1 302 Moved Temporarily
        Server: nginx/1.4.1
        Date: Tue, 01 Oct 2013 04:37:37 GMT
        Content-Type: text/html
        Transfer-Encoding: chunked
        Connection: keep-alive
        X-Powered-By: PHP/5.4.17-1~dotdeb.1
        Set-Cookie: PHPSESSID=vp99q5e5t5810e3bnnnvi6sfo2; expires=Thu, 03-Oct-2013 04:37:37 GMT; path=/
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Pragma: no-cache
        Location: /roTr94h4Gd

    How should I avoid the 302 in this case? I suppose I could modify my site to prevent the redirect, but generating a random URL on each access is a necessary feature of my web app. I added a <meta name="fragment" content="!"> tag to my index page and set it to return a static snapshot of the page when the flag is set, but it still returns a 302. I also added a header to return 200 before redirecting, but that had no effect either. Could someone suggest a good way to solve this problem?
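    A minimal sketch of one possible approach, not the app's actual code (note_create() and note_render() are hypothetical stand-ins): answer the first request with a 200 and the freshly created note, and move the URL change to the client with history.replaceState, so crawlers fetching / receive real HTML instead of a redirect.

        <?php
        // index.php sketch: create the note and render it directly with a 200
        $note = note_create();            // assumption: returns an array with an 'id' like 'roTr94h4Gd'
        http_response_code(200);
        note_render($note);               // prints the normal note HTML
        ?>
        <script>
          // keep the shareable random URL in the address bar without a server-side redirect
          history.replaceState(null, '', '/<?php echo htmlspecialchars($note['id']); ?>');
        </script>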

  • Usefulness of the Backlinks shown in Webmaster Tools

    - by Ewan Heming
    Is the list of links for a site shown in Google Webmaster Tools a complete list or just a sample? I've noticed that the links in there appear to be all the ones I didn't think would have any real value, either because they were nofollow or from irrelevant sites. The few I did think would be of some use have never shown up, and there are also links that are sometimes there and sometimes not (such as my LinkedIn profile). Does this mean that the missing links don't, or no longer, carry any value? It almost appears that the list is there for Google either to inform you about problems (there was a useful list there when someone tried to spam my site) or to misinform you about which link-building strategies work (to keep people guessing about what works and what doesn't).

  • High quality/performance shared hosting (in northern Europe)

    - by Bente
    I work as a web developer at almost all levels. However, my typical customer is a one-to-five-person consulting business. They have (or want) a web page with some kind of CMS so they can perform most (or all) editing themselves. I normally opt for Concrete5 as my default CMS because it's the most user-friendly (and free) CMS I have found. My good recurring customers I host on my own server as a service, but I need a good host for the customers where I want to deliver a product and not be responsible for whatever may happen in the future. However, I still struggle with hosting! Experience shows that the typical ~$1 shared hosting is way too slow to run Concrete5 smoothly, and a VPS is out of the question because I don't want to maintain it. So, where can I find a fast (from northern Europe), reliable shared host where I can put a site and not have to worry about the server going down or being unmaintained? I expect this should cost around $10-$20, but I'm open to all kinds of suggestions because different customers have different budgets.

  • Redirect error in Google Webmaster Tools report

    - by Aurelio De Rosa
    I built a CMS and used it to create the website http://www.tkdmontecatini.com. After a few days, Google Webmaster Tools started reporting several "Redirect error"s on pages such as:

        http://www.tkdmontecatini.com/it/photogallery
        http://www.tkdmontecatini.com/it/pagina/9/Informazioni/Corsi/Chi-Siamo
        http://www.tkdmontecatini.com/it/pagina/2/Informazioni/Eventi/Eventi

    The funny things are: if I access those links from a browser, everything is fine and there are no redirect loops or similar issues; and if I use the "Fetch as Googlebot" function, I get a "Success" result. Question: any idea why this happens and how I can fix it?
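    A minimal debugging sketch, not taken from the question: fetch the flagged URLs server-side without following redirects and see what status line actually comes back, optionally with a Googlebot user agent in case the CMS varies its behaviour per client.

        <?php
        // check what the server really answers for the flagged URLs (sketch)
        stream_context_set_default(['http' => [
            'user_agent'      => 'Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)',
            'follow_location' => 0,   // report the redirect itself instead of following it
        ]]);
        $urls = [
            'http://www.tkdmontecatini.com/it/photogallery',
            'http://www.tkdmontecatini.com/it/pagina/9/Informazioni/Corsi/Chi-Siamo',
        ];
        foreach ($urls as $url) {
            $headers = get_headers($url);
            echo $url, ' => ', $headers[0], "\n";   // e.g. "HTTP/1.1 302 Found"
        }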

  • After changing web host, I get a 'file does not exist' error

    - by Jordan
    I run a WordPress blog, and have recently changed web hosts. When changing web hosts, I copied all the files and exported/imported the database etc., as explained in lots of tutorials easily found on Google. The blog home page works fine. What goes wrong: when I click on any link from the home page, the browser gets stuck in a redirect loop. Looking at the error log, I see:

        File does not exist: /usr/local/apache/htdocs/index.php

    The directory /usr doesn't even exist for my website, so perhaps this is looking for a file that was present with my old web host and is no longer present with my new web host? What is going on, and how might I resolve it?
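    For what it's worth, /usr/local/apache/htdocs is a common default Apache DocumentRoot, which often points to permalink requests not being handled by WordPress's rewrite rules on the new host. A sketch of the standard WordPress .htaccess block, assuming Apache with mod_rewrite and WordPress installed in the web root (re-saving the permalink settings in the dashboard usually regenerates this file as well):

        # BEGIN WordPress (standard rewrite block, sketch)
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>
        # END WordPress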

  • PHP accessible shared content between two websites on the same VPS on different domains/IPs

    - by Lee Fentress
    I have two ecommerce websites, selling music digital downloads, on the same VPS, currently using cPanel/WHM (but thinking of switching to Virtualmin). They have separate domains and IPs of course. They both share from the same set of music files, so I have duplicate copies in each website directory, which takes up a lot of disk space. How might I go about sharing the same set of music files across both sites, allowing PHP access, so that it does not break my shopping cart's functionality of serving customers the downloads after they have paid for them? I thought of maybe using symlinks or something, but I don't know if it's possible, or if it would have to somehow circumvent built-in security features of the server. I'm new to VPS management.
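    A minimal sketch of the symlink idea the question mentions, with purely hypothetical paths (under cPanel each account's docroot will differ): keep one real copy of the music directory and point the second site at it.

        # hypothetical paths; run as root or a user with access to both accounts
        mv /home/site2/public_html/music /home/site2/public_html/music.old
        ln -s /home/site1/public_html/music /home/site2/public_html/music

    Whether PHP on the second site can read through the link depends on settings such as open_basedir and on how the two accounts are isolated, so it is worth testing the cart's readfile() delivery on one track before deleting the duplicate copy.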

  • How to point one sub-domain to another sub-domain and they can be used interchangeably

    - by Talon
    I'm trying to make secure.domain2.com load its content from secure.domain1.com, so if somebody goes to secure.domain2.com it will show the content of secure.domain1.com. Note that I don't want a redirect: if someone goes to secure.domain2.com, the address bar should still say secure.domain2.com even though it's loading content from secure.domain1.com. I've read that it's possible with a CNAME or something like that; what is the best way to do this?
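    A minimal sketch of the DNS side, assuming domain1's web server is (or can be) configured to answer for the secure.domain2.com host name as well:

        ; zone file entry for domain2.com (sketch)
        secure.domain2.com.   IN  CNAME  secure.domain1.com.

    The CNAME only points the name at the same server; the server behind domain1 still needs a virtual host (or alias) for secure.domain2.com, and if "secure" implies HTTPS, a certificate that covers that host name.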

  • How can I get cross-browser consistent behavior for TR heights within a table with a set height? [migrated]

    - by Dan
    I have an arbitrary number of tables with an arbitrary number of rows in each, and all tables are the same height. My initial approach was to just set the overall height of the table and hope the rows were smart enough to distribute themselves appropriately. That's not the case. I have 4 different behaviors going on with 4 browsers, but I need them all to render in at least a similar way.

    Safari & Chrome (WebKit): all rows are equal height, creating scroll bars as needed and fitting within the table height.

    Firefox: all rows are the height necessary to fit their content, with the remaining rows overflowing out of the table. Additionally, if the content of the rows does not take up all of the height, only the part of the table with content in it takes the background (though it seems, through use of Firebug, that the actual table [and TR] extend to the bottom of the proper table height).

    IE: all rows are the height necessary to fit their content, with the remaining rows overflowing out of the table.

    Obviously this only covers one version of each browser, and additional variation would likely appear with more being tested. Ideally, a solution where the browser renders TRs with less content smaller than those with more content, while still scrolling within the variable-height TRs when the overall table height is not enough, would be optimal. I could potentially see a solution to achieve that with JS, but can it be done with CSS? Or, if not, can the behavior that WebKit displays be made to work across the browsers? Thanks! PS: Example can be found here.
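    A CSS-only sketch of one workaround, assuming the markup can be changed so each cell's content sits in an inner wrapper div: capping the wrapper's height keeps row heights predictable across engines, at the cost of choosing a per-row budget yourself rather than letting the table distribute it.

        /* hypothetical class on a div placed inside each td */
        .row-content {
            max-height: 120px;   /* assumed per-row height budget */
            overflow-y: auto;    /* scroll long content instead of growing the row */
        }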

  • Google analytics and 301 redirects

    - by Ilian Iliev
    We have a multi-language website, and the front page redirects to the specific language page using a 301 redirect based on some logic. For example, http://mysite.com/ redirects to http://mysite.com/en/. The problem is that these redirects lose the original request details, so we do not get correct results for traffic sources in GA. How do you handle this case? Is there something we can do? Any ideas will be appreciated.
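    One partial remedy, sketched here rather than taken from the question: make sure the language redirect carries any campaign parameters through, so tagged links (utm_source and friends) still arrive intact on the page that actually runs the GA snippet.

        <?php
        // sketch of the "/" handler: 301 to the language page, preserving the query string
        $target = '/en/';   // assumed result of the existing language-detection logic
        if (!empty($_SERVER['QUERY_STRING'])) {
            $target .= '?' . $_SERVER['QUERY_STRING'];
        }
        header('Location: ' . $target, true, 301);
        exit;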

  • Single use download script - Modification [on hold]

    - by Iulius
    I have this single-use download script. It contains 3 PHP files: page.php, generate.php and variables.php.

    page.php:

        <?php
        include("variables.php");
        $key = trim($_SERVER['QUERY_STRING']);
        $keys = file('keys/keys');
        $match = false;
        foreach($keys as &$one) {
            if(rtrim($one)==$key) {
                $match = true;
                $one = '';
            }
        }
        file_put_contents('keys/keys',$keys);
        if($match !== false) {
            $contenttype = CONTENT_TYPE;
            $filename = SUGGESTED_FILENAME;
            readfile(PROTECTED_DOWNLOAD);
            exit;
        } else {
        ?>
        <html>
        <head>
        <meta http-equiv="refresh" content="1; url=http://docs.google.com/">
        <title>Loading, please wait ...</title>
        </head>
        <body>
        Loading, please wait ...
        </body>
        </html>
        <?php
        }
        ?>

    generate.php:

        <?php
        include("variables.php");
        $password = trim($_SERVER['QUERY_STRING']);
        if($password == ADMIN_PASSWORD) {
            $new = uniqid('key',TRUE);
            if(!is_dir('keys')) {
                mkdir('keys');
                $file = fopen('keys/.htaccess','w');
                fwrite($file,"Order allow,deny\nDeny from all");
                fclose($file);
            }
            $file = fopen('keys/keys','a');
            fwrite($file,"{$new}\n");
            fclose($file);
        ?>
        <html>
        <head>
        <title>Page created</title>
        <style> nl { font-family: monospace } </style>
        </head>
        <body>
        <h1>Page key created</h1>
        Your new single-use page link:<br>
        <nl><?php echo "http://" . $_SERVER['HTTP_HOST'] . DOWNLOAD_PATH . "?" . $new; ?></nl>
        </body>
        </html>
        <?php
        } else {
            header("HTTP/1.0 404 Not Found");
        }
        ?>

    variables.php:

        <?
        define('PROTECTED_DOWNLOAD','download.php');
        define('DOWNLOAD_PATH','/.work/page.php');
        define('SUGGESTED_FILENAME','download-doc.php');
        define('ADMIN_PASSWORD','1234');
        define('EXPIRATION_DATE', '+36 hours');
        header("Cache-Control: no-cache, must-revalidate");
        header("Expires: ".date('U', strtotime(EXPIRATION_DATE)));
        ?>

    Calling http://www.site.com/generate.php?1234 generates a unique link like page.php?key1234567890, which is then sent by email to my user. Now, how can I generate a link like page.php?key1234567890&[email protected] ? I think I must access the generator page like generate.php?1234&[email protected] . P.S. This variable will be shown on the download page after "Hello, ". I tried everything to complete this, with no luck. Thanks in advance for help.
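    A minimal sketch of one way to do it, not a drop-in patch: switch both scripts from reading the raw QUERY_STRING to named parameters, so a name can travel alongside the key.

        <?php
        // generate.php sketch: called as generate.php?pass=1234&name=John
        $password = isset($_GET['pass']) ? trim($_GET['pass']) : '';
        $name     = isset($_GET['name']) ? trim($_GET['name']) : '';
        // ... existing key-creation code ...
        // $link = "http://" . $_SERVER['HTTP_HOST'] . DOWNLOAD_PATH
        //       . "?key=" . $new . "&name=" . urlencode($name);

        // page.php sketch: called as page.php?key=key1234567890&name=John
        $key  = isset($_GET['key'])  ? trim($_GET['key'])  : '';
        $name = isset($_GET['name']) ? trim($_GET['name']) : '';
        // ... existing key check against keys/keys ...
        // echo "Hello, " . htmlspecialchars($name);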

  • Evidence for automatic browsing - Log file analysis

    - by Nilani Algiriyage
    I'm analyzing web server logs in both Apache and IIS log formats. I want to find evidence of automated browsing, such as web robots, spiders, bots, etc. I used the Python package robot-detection 0.2.8 to detect robots in my log files, but I know there may be other robots (automated programs) that have traversed the web site which robot-detection cannot identify. So I want to ask: are there any specific clues in log files that human users do not leave but automated software does? Do they follow a specific navigation pattern? I saw some requests for favicon.ico; does this indicate automated browsing? I found this article and this question with some valuable points.

  • Do search engines rank internal redirects negatively?

    - by siverd
    A client is in the late stages (code complete) of a website redesign and unfortunately hasn't implemented 301 redirects to point high-traffic pages to the new URLs. As I understand it, our only option at this point is to create redirects within the CMS. Our CMS allows us to do this: www.mysite.com/category/current-page.html will redirect to www.mysite.com/new-category-name/new-page.html. The site now uses custom logic on our 404 page to check this list of redirects and, if one exists, forwards the user to new-page.html. I understand that using 301 redirects would be the correct way to maintain our page rank, but I think that would require a code change, which isn't possible. Question: how will search engines respond to this? Will they wait until the redirect happens and allow us to keep our page rank (authority, trust, etc.), or will they see the 404 page and down-rank us? Worst case, will they make our new-page.html start from a rank of "0"? Thanks for your help.
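    A sketch of the idea usually suggested in this situation, assuming the 404 template can run PHP before any output is sent (lookup_redirect() is a hypothetical stand-in for the CMS's existing lookup): have the 404 handler answer with a 301 itself whenever a mapping exists, so crawlers see a permanent redirect rather than a soft 404.

        <?php
        // inside the CMS's 404 handler (sketch)
        $new = lookup_redirect($_SERVER['REQUEST_URI']);   // null when no mapping exists
        if ($new !== null) {
            header('Location: ' . $new, true, 301);        // permanent redirect, keeps link equity
            exit;
        }
        header('HTTP/1.0 404 Not Found');                  // genuinely missing pages stay 404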

  • SEO on an existing platform

    - by Simon
    I've been given the task of increasing user visits and conversions for a recruitment website. Conversions would be interested job seekers submitting their CVs. The manager would first like to improve organic search results and optimize the website before starting targeted campaigns. The problem is, they are using a proprietary recruitment software platform which I can barely make changes to. For example, the URLs all look like dynamic URLs without any semantic meaning, and the markup is almost completely built automatically by the platform. I'm also confident that the lack of submitted CVs is due to a bad user experience on the website (no incentives or clear CTA to register). Besides optimizing the static texts and page titles, is there anything I can do? Thanks

  • Track sales and commission with third-party tool

    - by Andrew
    I have a clothing website where I link to various clothing retailers. I have reached an agreement with one of the retailers whereby they will pay a commission to us for every sale they make from traffic that was referred by our site. I need a mechanism for tracking how much commission should be paid to us that involves as little work as possible to implement on their side. We both have Google Analytics.

    Option 1: they record a goal in their GA account whenever someone makes a purchase on their site, see how many completed goals are marked as referral traffic from our site, and calculate commission accordingly. The problem with this is that the whole process of calculating and paying commission will be manual. They will need to frequently check how many sales were generated by referral traffic from our site, and we will probably have to chase them for commission payments. Also, since we won't have access to their GA data, we will need to trust that they report all sales accurately.

    Option 2: sign them up to an affiliate network like Commission Junction or Google's Affiliate Network, and connect to them through this network. The problem with this solution is that it seems too heavyweight; ideally we don't want to ask a retailer to go through a whole sign-up process just to deal with us and pay us commission.

    I am assuming that there must be some lightweight service that tracks the number of sales by one site and pays commission accordingly to the other site, where the sign-up and installation procedure is simple and fast.

  • PHP W3 Validator API, Is this good? [closed]

    - by Josh Purcell
    I was trying to find a way to check whether my site's code was valid, but I was continuously going over to the W3C validator, so I decided to make an "API" (though it really isn't one!). I just wanted to know if anybody can find a better solution than the one I have made. This is what I currently use, called with ?uri=http://www.mydomain.com :

        <?php
        if(!$_GET['uri']) {
            echo "No URI!";
        } else {
            $CheckURI = "http://validator.w3.org/check?uri=".$_GET['uri'];
            $URL = file_get_contents($CheckURI);
            $Start = strpos($URL, "<title>") + 7;
            $End = strpos($URL, "</title>");
            $Title = substr($URL, $Start, $End-$Start);
            if(preg_match('[Invalid]',$Title)) {
                // Code is INVALID
                echo "<a href='$CheckURI' title='This is not good!' target='_BLANK'>INVALID Source</a>";
            } elseif(preg_match('[Valid]',$Title)) {
                // Code is VALID
                echo "<a href='$CheckURI' title='Check It Yourself!' target='_BLANK'>Valid Source</a>";
            } else {
                // It Went WRONG
                echo "";
            }
        }
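    A small alternative sketch rather than a verdict on the code above: as far as I recall, the W3C validator reports its result in response headers (X-W3C-Validator-Status: Valid, Invalid or Abort), which is sturdier than scraping the page title; the header name is worth double-checking against the validator's documentation.

        <?php
        // sketch: read the validator's status header instead of parsing the HTML title
        $uri     = urlencode($_GET['uri']);
        $headers = get_headers("http://validator.w3.org/check?uri=" . $uri, 1);
        $status  = isset($headers['X-W3C-Validator-Status']) ? $headers['X-W3C-Validator-Status'] : 'Unknown';
        echo ($status === 'Valid') ? "Valid Source" : "INVALID Source";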

  • Your Most Popular Sites Screen in IE 10 - Icons not appearing

    - by MJWadmin
    We use the following code to add icons for the favicon, tablets, smartphones, Windows 8 tiles and the like:

        <link rel="apple-touch-icon" href="apple-touch-icon.png">
        <link rel="shortcut icon" type="image/x-icon" href="/favicon.ico"/>
        <link rel="apple-touch-icon-precomposed" sizes="144x144" href="apple-touch-icon-144x144-precomposed.png">
        <link rel="apple-touch-icon-precomposed" sizes="114x114" href="apple-touch-icon-114x114-precomposed.png">
        <link rel="apple-touch-icon-precomposed" sizes="72x72" href="apple-touch-icon-72x72-precomposed.png">
        <link rel="apple-touch-icon-precomposed" href="apple-touch-icon-precomposed.png">
        <meta name="msapplication-TileImage" content="apple-touch-icon-144x144-precomposed.png"/>
        <meta name="msapplication-TileColor" content="#17151a"/>

    Unfortunately this doesn't seem to work for IE9 and IE10's "your most popular sites" screen, and Google searches have been unenlightening. Stack uses <link rel="apple-touch-icon" href="apple-touch-icon.png">, which seems to work for it, but not for us. Any clues to a solution appreciated.

  • displaying first few pages of a pdf on a page = duplicate content?

    - by Ace
    I am embedding Scribd PDFs on my website. These are exam-paper PDFs which are also available on other websites. Since the Scribd document is an embed/iframe, I think Google considers my page to be empty with no content; does Google see iframe content? So I decided to display the first pages of the PDF as text on the page for Google. Then, for user experience, I hide the text and replace it with the Scribd embed code using JavaScript. I have two worries about this method. Firstly, I am displaying the first pages of the PDF, and the same PDF may be hosted on other websites; will this be considered duplicate content? Secondly, I am hiding the content and replacing it with the Scribd embed using JavaScript; is this considered bad by Google?

  • what's wrong with my sitemap?

    - by cadrian
    Google Webmaster Tools says that my sitemap is in the wrong format. I don't see my mistake; I think I followed every guideline provided by Google. Can someone help me?

        <?xml version="1.0" encoding="UTF-8"?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <url>
            <loc>http://www.vocalcontraste.fr/</loc>
          </url>
          <url>
            <loc>http://www.vocalcontraste.fr/presentation/</loc>
            <lastmod>2012-01-30T21:49:06+01:00</lastmod>
          </url>
          <url>
            <loc>http://www.vocalcontraste.fr/les-concerts/</loc>
            <lastmod>2012-12-13T21:55:00+01:00</lastmod>
          </url>
          <url>
            <loc>http://www.vocalcontraste.fr/ecoutez-contraste/</loc>
            <lastmod>2012-07-13T18:19:45+01:00</lastmod>
          </url>
          <url>
            <loc>http://www.vocalcontraste.fr/repertoire/</loc>
            <lastmod>2012-07-13T17:30:14+01:00</lastmod>
          </url>
          <url>
            <loc>http://www.vocalcontraste.fr/la-presse/</loc>
            <lastmod>2012-07-11T08:17:48+01:00</lastmod>
          </url>
          <url>
            <loc>http://www.vocalcontraste.fr/recrutement/</loc>
            <lastmod>2012-02-01T22:22:03+01:00</lastmod>
          </url>
          <url>
            <loc>http://www.vocalcontraste.fr/nos-partenaires/</loc>
            <lastmod>2012-07-11T07:43:31+01:00</lastmod>
          </url>
          <url>
            <loc>http://www.vocalcontraste.fr/contactez-nous/</loc>
            <lastmod>2012-02-02T19:01:32+01:00</lastmod>
          </url>
        </urlset>

  • Buying a custom domain for blogger

    - by John Demetriou
    I am about to move my Blogger site to a custom domain. I do all the steps as told, but whenever I find the perfect custom domain (one that is free) I get redirected to Google Apps for Business... Is it necessary to get Google Apps for Business before buying a custom domain? If I only start a free trial of Google Apps for Business, will my custom domain still be valid when the trial period expires?

  • International TLD's vs. duplicate content

    - by Litso
    Hey all, I currently work at a pretty big website that has visitors from around the globe. My job is to help out with the SEO, and one thing we've been discussing lately is the use of international TLDs. The ones we use range between:

    (partly) translated websites like .es and .de that serve most of the content in the country's language;
    non-translated (English) websites for non-English-speaking countries (due to a lack of translations), like .ro and .cz;
    English websites for English-speaking countries with localized TLDs (.co.nz, .co.uk).

    On one hand I really have the feeling this is causing a lot of duplicate content, especially for the last two categories of TLDs. On the other hand, country-specific TLDs seem to score a lot better in that country's Google. Would it be advisable to keep using these domains, or should we canonicalize them all to the .com version?
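    Not part of the question, but the mechanism most directly relevant here is the rel="alternate" hreflang annotation, which tells Google which TLD serves which language or region and which version is the default; a minimal sketch with example.* standing in for the real domains:

        <link rel="alternate" hreflang="es" href="http://www.example.es/" />
        <link rel="alternate" hreflang="de" href="http://www.example.de/" />
        <link rel="alternate" hreflang="en-nz" href="http://www.example.co.nz/" />
        <link rel="alternate" hreflang="en-gb" href="http://www.example.co.uk/" />
        <link rel="alternate" hreflang="x-default" href="http://www.example.com/" />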

  • Google Maps API: Premier License or excess map loads?

    - by j0nes
    I am currently looking for a way to deal with the Google Maps API usage limits. I am planning a redesign of our page that will probably get around 2 million map loads per month. This will surely break the usage limit of 750,000 map loads per month available in the free version. If we pay for excess map loads, this means we would have to pay $5,000 per month. The other option would be to use a Premier license, but there is very little information available on its usage limits and price. I have filled in the request form to get a custom offer from Google, but I have not received a response yet. Can any of the Premier license holders tell me which option would be cheaper for my usage pattern: paying for a Premier license or paying for excess map loads?

  • remove ssl from Google search results

    - by user73457
    I am the webadmin of a WordPress site that serves up http pages statically. The problem is that some of the pages are shown as https in Google search results. For instance, if the search term "Example Press Kit" is entered the search result site link comes up as: https://example.com/presskit/ We don't have a site ssl certificate, so surfers are being bounced. I have tried everything. Most recently I created a new website in Google WebAdmin for the https version of our home page. Then, I added sitelinks that should have redirected site links intended for https://example.com/* to http://example.com/*. But it doesn't work! Google still shows a dead link to http://example.com/presskit. I didn't think dead links lasted very long on Google results, but there they are, two weeks later. Any ideas?

  • Question on business connections and page rank?

    - by Viveta
    I just want to ask this question to get a yes/no answer on something I've been wondering about lately. Countless sites now use nofollow, which makes it harder to get rankings for your page even if your website's information is useful and will get traffic, but perhaps isn't something your business connections share. What I'm trying to find out is whether having a bunch of, say, "likes" on your Facebook page passes any benefit from those connections through to your main website. Are you then competing against your own website in the SERPs, your Facebook page versus your home page? Am I correct that if your Facebook page starts doing really well in terms of connections and likes (bumping up its own PageRank), links on that page with certain optimized keywords bring no benefit to your website (other than people reaching your Facebook page and then being more likely to click through to your site)? I hope I explained what I'm asking well; I just want a better picture of this so I know what to focus on when linking to my desired landing pages in the future.

  • Effect of WordPress membership plugins on site speed -- currently trying s2member [migrated]

    - by Richard
    I'm taking a look at s2member. I have it running, and my site is very slow: it's taking on average about 9 or 10 seconds to load. This is the site: http://richardclunan.net . I want to figure out whether the s2member plugin is causing it to be slow, and whether there are other, faster membership plugins. Three questions:

    1. Are there particular settings or things specific to s2member that I should take care of to ensure s2member doesn't make my site slow?
    2. If I deactivate the plugin to test the speed of the site with it deactivated, will that mean I'll have to re-specify the s2member settings when I reactivate it? And after it's reactivated, will members' accounts work OK?
    3. Does anybody have observations on s2member or other WordPress membership plugins and their effect on site speed?
