Search Results

Search found 5380 results on 216 pages for 'webmasters'.


  • Pointing a domain away from Google

    - by redbeard
    I've already asked this question on Google Apps support but haven't got an answer yet, and I am in a bit of a hurry: I need to have the website up by Monday. Question: we have a domain registered at 101domain that we used just for Google Apps (email and some services); the domain was never used for a website. Now we have pointed the domain to a local hosting service, set everything up as it should be, and pointed the mail records back to Google from the hosting service, and Gmail works perfectly. The problem is that we tried to set up a website at the hosting service, but the domain still points to the Google Apps login. Everything looks like it should. We recently bought 2 more domains that are set up the same way (we use Google Apps on them too); the only difference I can think of is that we didn't use them for Apps first. Could this be because the CNAME record lags behind the DNS server change?
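
    When debugging cases like this it helps to see what public DNS currently returns for each record. Below is a minimal sketch, assuming the third-party dnspython package and a placeholder domain (substitute the real one):

        import dns.resolver  # third-party package: pip install dnspython

        domain = "example.com"  # placeholder for the real domain

        # Query the records that decide where the site and the www host point.
        for name, rdtype in [(domain, "A"), ("www." + domain, "CNAME"), ("www." + domain, "A")]:
            try:
                for rdata in dns.resolver.resolve(name, rdtype):
                    print(name, rdtype, "->", rdata)
            except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN) as exc:
                print(name, rdtype, "->", type(exc).__name__)

    If the www CNAME still points at a Google host while the A record already points at the new server, the old record is simply cached and will catch up once its TTL expires.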

    Read the article

  • Why can the number of 'indexed' images go down?

    - by Roman Matveev
    I have a site with several thousand images, all of which are included in the sitemap submitted to Google Webmaster Tools. The number of 'submitted' images is OK, but the number of 'indexed' images is significantly lower than the number of 'submitted' ones, and it is going DOWN! I'd understand if not all of my images got indexed (though that is also unclear and very frustrating for me), but I cannot understand how the indexing can go in the negative direction. All the images stay in place, and the pages containing them stay unchanged. At least they are intended to. Any thoughts?

    Read the article

  • Google Analytics: real-time user stats vs. Audience Overview user stats

    - by udog
    When looking at the real-time analytics reporting for our app, it shows around 150-180 users at around 10 AM (our peak usage time). When I look at the Audience Overview report for the same day (hourly breakdown), the number of users shown for the 10 AM hour is over 1,000. I'm sure this has to do with some sort of aggregation, but I would like to know more about how these two numbers are calculated in order to understand it.
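
    One back-of-the-envelope way to see how both numbers can be right: the real-time view counts users active at a single instant, while the hourly report counts everyone who visited at any point in the hour. A sketch with illustrative numbers only (the 10-minute average session length is an assumption):

        # Little's law: concurrent users = arrival rate x average session length.
        concurrent_users = 165       # midpoint of the 150-180 real-time reading
        avg_session_minutes = 10     # hypothetical average time on site

        arrivals_per_minute = concurrent_users / avg_session_minutes
        users_per_hour = arrivals_per_minute * 60
        print(users_per_hour)        # 990.0 -- the same order as the 1000+ hourly figure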

    Read the article

  • HTML5 media loading sometimes suspends or aborts: misconfigured Apache?

    - by Joan Botella
    Recently, some code that had been working fine for months started to behave unexpectedly. The code is just a media-file-loading JavaScript function that uses jQuery. It's pretty long, but in essence it is like this:

        var $audio = $('<audio>');
        $audio.on('canplaythrough', function (e) {
            $audio[0].play();
        });
        $audio.attr('src', 'song.ogg');

    Basically, the file only loads sometimes, and sometimes it stops loading with a suspend or even an abort event. I have uploaded a little testing HTML to http://www.joanbotella.com/tests/loading , where you can see what's happening. You can download the test files from http://www.joanbotella.com/tests/loading/loadingTest.zip for local testing. I have just checked that opening the test index.html file directly in Firefox, rather than through my localhost Apache server, makes the audio files perfectly playable. So, I assume, my hosting provider and I have the Apache server misconfigured for serving media files. My software versions are: Apache 2.2.22-1ubuntu1.7, Mozilla Firefox 31.0, Chromium 36.0.1985.125 and jQuery 1.11.0. Can you help me? Thanks in advance!
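
    A quick way to check whether Apache is sending sane headers for the audio files is to inspect a HEAD response. A minimal sketch, assuming the third-party requests package and a placeholder URL for one of the test files:

        import requests  # third-party package: pip install requests

        url = "http://localhost/tests/loading/song.ogg"  # placeholder path

        r = requests.head(url, allow_redirects=True)
        print(r.status_code)
        # Browsers care about these three headers when streaming media:
        for header in ("Content-Type", "Content-Length", "Accept-Ranges"):
            print(header, "->", r.headers.get(header))

    A wrong or generic Content-Type for the .ogg files, or a missing Accept-Ranges: bytes, is one common cause of stalled media loads.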

    Read the article

  • Is it time to add IPv6 access to my websites?

    - by Rob Hoare
    I have several dedicated servers and VPS servers, and some of those are at companies that have provided me with native IPv6 blocks (in addition to the IPv4 addresses). Does it currently make sense to point an AAAA record to an IPv6 address on my server, in addition to the A record pointing to the IPv4 address? This would be for, for example, the www subdomain. (The networking and web server software would be set up on the server to respond appropriately.) A while ago I read that a small percentage of users (1 in a thousand?) would have slow or no access if a subdomain had both A and AAAA records, because their networking software asked for one and got the other. Is that still the case? Will adding an AAAA record inconvenience some users, or is the percentage already smaller and falling? In other words, is now the time to add native IPv6 support for a busy website aimed at the general public, or is it still too early?
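
    Once the AAAA record is published, dual-stack resolution is easy to verify from any client. A minimal sketch using only the Python standard library (the hostname is a placeholder):

        import socket

        host = "www.example.com"  # placeholder hostname

        # Ask separately for IPv4 (A) and IPv6 (AAAA) results.
        for family, label in ((socket.AF_INET, "A"), (socket.AF_INET6, "AAAA")):
            try:
                for info in socket.getaddrinfo(host, 80, family, socket.SOCK_STREAM):
                    print(label, info[4][0])
            except socket.gaierror:
                print(label, "no record")

    For what it's worth, modern clients race IPv4 and IPv6 connections ("Happy Eyeballs", RFC 6555), which was designed to remove the slow-fallback penalty the old warnings referred to.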

    Read the article

  • What is the maximum time for a user to return to Google for the visit to be flagged as a bounce in GA?

    - by Anonymous
    I know that Google measures bounce rates by how fast a user returns to the results page after clicking through to a website. Roughly what is the maximum duration of a visit for it still to be considered a bounce when the user returns? I.e. <5 seconds, <30 seconds? I'm mainly interested because a lot of users clicking through my PPC adverts (AdWords) appear to be bouncing, despite my ads having a high Quality Score and the pages being entirely relevant to the advert copy and tied as closely as possible to what I think users may be searching for with the key phrases I've selected, so the high bounce rate (100% on some keywords) seems a bit strange. If a bounce isn't determined by time, but simply by whether a user returns to the SERP after visiting my site, after any amount of time, that would make more sense; but the average duration of visit for my keywords with a 100% bounce rate in GA is 00:00:00, which suggests users immediately returned to the SERPs, which again is odd. Is my GA data being skewed by https or anything like that? Scratching my head here.

    Read the article

  • Is there a way of using HTTPS with Amazon's CloudFront CDN and CNAMEs?

    - by Metalshark
    We use Amazon's CloudFront CDN with custom CNAMEs hanging under the main domain (static1.example.com). Although we can break this uniform appearance and use the original whatever123wigglyw00.cloudfront.net URLs to utilise HTTPS, is there another way? Do Amazon or any other similar providers offer HTTPS CDN hosting? Is TLS and its selective encryption available for use somewhere (SNI: Server Name Indication)? Footnote: I assume the answer is no, but I'm asking in the hope that someone knows. EDIT: Now using Google App Engine https://developers.google.com/appengine/docs/ssl for CDN hosting with SSL support.

    Read the article

  • Open Source PHP based secure file download script?

    - by SiddharthP
    Basically I need a self-hosted solution where I, as the admin, can create client areas (which can be simple folders) where I upload files and secure them with a username / password. A client page would then be automatically generated, on which the client can enter the username / password and download the files. It's a relatively simple script, but I'm having a hard time finding open source solutions that accomplish what I need. Any help would be appreciated.

    Read the article

  • Google suddenly only indexes https and not http

    - by spender
    So all of a sudden, searches for our site "radiotuna" give out the result as an HTTPS link: https://www.google.com/?q=radiotuna#hl=en&safe=off&output=search&sclient=psy-ab&q=radiotuna&oq=radiotuna&gs_l=hp.12...0.0.0.3499.0.0.0.0.0.0.0.0..0.0.les%3B..0.0...1c.LnOvBvgDOBk&pbx=1&bav=on.2,or.r_gc.r_pw.r_qf.&fp=177c7ff705652ec3&biw=1366&bih=602 We only use HTTPS for the download of two specific files (these URLs are resources used for the auto-update functionality of an app we distribute). All other parts of the site should be served over HTTP. We wouldn't like to see any other traffic over HTTPS, nor any of our site links appearing in search engines as HTTPS. I'd like to address this issue. It seems the following solutions are available: hand out an HTTPS-specific robots.txt, as such:

        User-agent: *
        Disallow: /

    and/or, at app level, 301 permanent redirect all requests (except the two above) to HTTP if they come in as HTTPS. My concern with the robots method is that if (for some reason) Google decided not to index the HTTP pages, disallowing the HTTPS pages might mean Google has nothing left to index, with disastrous consequences for our ranking. This means I'm inclined to go with the 301 redirect. Any thoughts?
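
    The app-level rule described above is straightforward to express in any language. A minimal sketch in Python (the update-file paths and the domain are hypothetical placeholders):

        # Send HTTPS requests back to HTTP with a 301, except the two update files.
        ALLOWED_HTTPS_PATHS = {"/updates/feed.xml", "/updates/app.zip"}  # hypothetical

        def redirect_decision(scheme, path):
            """Return (status, Location) for a redirect, or None to serve normally."""
            if scheme == "https" and path not in ALLOWED_HTTPS_PATHS:
                return 301, "http://www.example.com" + path
            return None

        print(redirect_decision("https", "/about"))            # (301, 'http://www.example.com/about')
        print(redirect_decision("https", "/updates/app.zip"))  # None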

    Read the article

  • Find "secret" port number

    - by CJ Sculti
    This may be kind of an odd question. My friend has challenged me. Somehow, he changed the "port" of his site to 31337: if you just go to domain.com, you get redirected to Google; to access the real site you go to domain.com:31337. He is going to change it again, and he is challenging me to find out which port it is. Is this possible without guessing? Hopefully someone can help! Thanks. Oh, and is this the right Stack Exchange site to post this on?
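
    There is nothing secret about a TCP port: all 65,535 of them can simply be tried. A minimal connect-scan sketch (the hostname is a placeholder; dedicated tools such as nmap do the same thing much faster, and you should only scan hosts whose owner has agreed to it):

        import socket

        host = "domain.com"  # placeholder for the friend's domain

        for port in range(1, 65536):
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(0.2)
                if s.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                    print("open:", port)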

    Read the article

  • Is it good or bad to have dynamic content in page titles and/or descriptions

    - by Gunjan
    On a local listing website, I append the number of search results found to the description meta tag of the page (not in the title currently), as I think this is valuable for users, e.g. "Find addresses, phone numbers, blah blah blah for 21 outlets in locality. Some more stuff after this...". As more places are added to the database, the description for the same page will change frequently. Is this good or bad for SEO? And how about doing the same for title tags?

    Read the article

  • Optimizing data downloaded via 'link' media queries and asynchronous loading

    - by adam-asdf
    I have a website that tries to make sensible use of media queries and avoid 'expensive' CSS for users of mobile devices. My eventual goal is to make it 'mobile-first', but for now, since it is based on Twitter Bootstrap, it isn't. I included some background images (Base64-encoded) and styles that would only apply to "full-size" browsers in a separate stylesheet loaded asynchronously via Modernizr.load. In Firefox (but not WebKit browsers) this means that if you navigate away from the homepage and then return, the content (specifically, all those extras) 'blinks' when it finishes loading... or maybe I should say reloading. If, instead of using Modernizr.load, I include that stylesheet via a link element in the head with a media query attribute, will that prevent the data from being downloaded by non-matching browsers (mobile, based on screen size) to which it is inapplicable?

    Read the article

  • How can I avoid a 302 for Fetch as Bot?

    - by CookieMonster
    I originally posted this on Stack Overflow, but I believe here is a better place to ask. My web application is very similar to notepad.cc, which redirects to a randomly generated URL upon access, e.g. http://myapp.com/roTr94h4Gd. (Please note that notepad.cc is not my site.) Probably because of this redirect feature, when I do "fetch as Google" or "fetch as Bingbot", I get a 302 and no HTML content, not even an <html></html> tag:

        HTTP/1.1 302 Moved Temporarily
        Server: nginx/1.4.1
        Date: Tue, 01 Oct 2013 04:37:37 GMT
        Content-Type: text/html
        Transfer-Encoding: chunked
        Connection: keep-alive
        X-Powered-By: PHP/5.4.17-1~dotdeb.1
        Set-Cookie: PHPSESSID=vp99q5e5t5810e3bnnnvi6sfo2; expires=Thu, 03-Oct-2013 04:37:37 GMT; path=/
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Pragma: no-cache
        Location: /roTr94h4Gd

    How should I avoid the 302 in this case? I suppose I could modify my site to prevent the redirect, but generating a random URL on each access is a necessary feature of my web app. I added a <meta name="fragment" content="!"> tag to my index page and set it to return a static snapshot of the page when the flag is set, but this still returns a 302. I also added a header to return 200 before redirecting, but this had no effect either. Could someone give me a good suggestion to solve this problem?

    Read the article

  • My First robots.txt

    - by Whitechapel
    I'm creating my first robots.txt and wanted to get a second opinion on it. Basically I have an FTP setup on my board for some special users to transfer files between each other, and I do NOT want that included in searches by the bots. I also want to point to my sitemap, which gets auto-generated by a PHP page (it links to xmlsitemap.php because that generates the sitemap when called). My goal is to allow any search bot to crawl the forums to grab metadata. So here is what I have. What else should I include, and do I need to fix anything?

        User-agent: *
        Disallow: /admin/
        Disallow: /ali/
        Disallow: /benny/
        Disallow: /cgi-bin/
        Disallow: /ders/
        Disallow: /empire/
        Disallow: /komodo_117/
        Disallow: /xanxan/
        Disallow: /zeroordie/
        Disallow: /tmp/
        Sitemap: http://www.vivalanation.com/forums/xmlsitemap.php

    Edit: I'm not sure how to handle all the users' folders under /public_html/, since the robots.txt will be going in /public_html.
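
    Rules like these can be spot-checked before going live with the standard library's robots.txt parser. A minimal sketch (the sample paths are made up):

        from urllib import robotparser

        rp = robotparser.RobotFileParser()
        rp.set_url("http://www.vivalanation.com/robots.txt")
        rp.read()  # fetches and parses the live file

        for path in ("/forums/index.php", "/admin/secret.txt", "/benny/file.zip"):
            url = "http://www.vivalanation.com" + path
            print(url, "->", "allowed" if rp.can_fetch("*", url) else "blocked")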

    Read the article

  • Track those visitors who come through a particular link

    - by busybee235
    I want to track visitors who come to my site through a particular link. For example, for those visitors coming from http://www.domain.com/abc123, I can get their pageviews, time on site, bounce rate, referrer, pages per visit, etc. After that I can store that info in my database on a daily basis. Can anyone suggest a service, API or piece of software for this? I have used Google Analytics utm tags, which work perfectly well for my requirement, but I don't know how many links I can track with them. I have around 80-100 links to track a day, and the number of links will be increasing. I couldn't find any documentation regarding a limit on campaigns in GA. If there's no such limit, I can start this project. Thanks
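
    For what it's worth, GA's campaign tags are just query-string parameters, so tracked links can be generated programmatically. A minimal sketch using the standard utm_ parameters (the example values are made up):

        from urllib.parse import urlencode

        def tag_link(base_url, source, medium, campaign):
            """Append Google Analytics campaign (utm_) parameters to a URL."""
            params = urlencode({
                "utm_source": source,
                "utm_medium": medium,
                "utm_campaign": campaign,
            })
            return base_url + "?" + params

        print(tag_link("http://www.domain.com/abc123", "newsletter", "email", "may_promo"))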

    Read the article

  • Tackling thin content on an images gallery

    - by Ted Wilmont
    We run an images gallery as part of our site; however, we have over 8,000 images, and every image has a separate HTML page of its own to display the image caption, related images and comments from users of the site. This seems to be a problem, especially with the Google Panda update, because these pages are technically "thin content". What would be the best way to tackle this? We'd love some feedback and advice regarding this scenario. We have thought of a few options already but can't decide: We could noindex the separate image pages and lose any image search listings we have, in favour of removing these thin pages from the index. Or we could 301 all of the individual image pages back to the image category listing, anchor each image (e.g. #img2122), and include all of the comments and descriptions on the category listing page itself. If we were to simply list all of the images and content on the category pages themselves, what would be the best method? We could put all of the content in the anchor tags and use jQuery to display it in a box when a user clicks on the image, or we could use Ajax to retrieve the information. However, what's the best Ajax method for SEO? Any ideas, suggestions, tips or advice are greatly appreciated, and thank you in advance for anything given.

    Read the article

  • Redirect pages to fix crawl errors

    - by sarah
    Google is giving me crawl errors for pages that I have removed, like www.mysite.com/mypage.html. I want to redirect these pages to the new page www.mysite.com/mysite/mypage. I tried to do that using .htaccess, but instead of fixing the problem the crawl errors increased, and a new one appeared for www.mysite.com/www.mysite.com. This is my .htaccess file:

        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /sitename/
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /sitename/index.php [L]
        </IfModule>
        # END WordPress

    Should I add the following after the rewrite rule, or should I do something else?

        RewriteRule ^pagename\.html$ http://www.sitename.com/pagename [R=301]

    Read the article

  • Hijax == sneaky Javascript redirects? Will I get banned from Google?

    - by Chris Jacob
    Question: will I get penalised for "sneaky JavaScript redirects" by Google if I have the following Hijax setup (which requires a JavaScript redirect on the page indexed by Google)?

    Goal: I want to implement Hijax to make AJAX content accessible to non-JavaScript users and search engine crawlers.

    Background: I'm working on a static file server (GitHub Pages). No server-side tricks are allowed (so Google's #! "hash bang" solution is not an option). I'm trying to keep my files DRY: I don't want to repeat the common OUTER template (header, navigation menu, footer, etc.) in all my files; it will live in the main index.html.

    The Hijax setup: index.html contains all the OUTER html/css/js, i.e. the site's template. index.html has a <div id="content"> which defaults to containing the "homepage" html, and a navigation menu with a Hijax link to an "about" page. With JavaScript disabled (e.g. a crawler), the link is followed to /about.html. With JavaScript enabled (e.g. most people), the link updates the URL hash fragment to /#about and jQuery replaces the <div id="content"> innerHTML with $("#content").load("about.html #inner-container");.

    AJAX content: about.html does not contain anything extra to try and cloak content for crawlers. It contains enough HTML / CSS / JavaScript to display /about.html as a standalone page with its own META data, e.g. <html><head><title>About</title>...</head><body></body></html>, but it has NO OUTER HTML template (i.e. header, navigation menu, footer, etc.). Its <body> contains a <div id="inner-container"> holding the content that is injected into index.html, and a <noscript> tag as the first child of <body> which explains to non-JavaScript users that they are viewing the about page's "inner content", with a link to index.html to get the full page layout with menu.

    The (sneaky?) redirect: Google indexes the /about.html page. However, when a person with JavaScript enabled visits that page there is no OUTER html template (header, navigation menu, footer, etc.), so I need a JavaScript redirect to get them over to /#about (deep-linking to the "about" page "state" in index.html). I'm thinking of a "redirect on click or after 10 seconds". The end result is that the user ends up on an "enhanced" page back on index.html with all its OUTER template, but the core "page" content is practically identical.

    Known issue with inbound links (e.g. sharing / bookmarking): it seems that if a user shares the URL /#about on their blog, then when allocating inbound links to my site Google ignores everything after the # and allocates the value to the / page; see http://stackoverflow.com/questions/5028405/hashbang-vs-hijax/5166665#5166665. I can only try to minimise this issue by offering "share" buttons on the page with the appropriate URLs, i.e. /about.html.

    Duplicate: sorry, I posted this same question over on http://stackoverflow.com/questions/5561686/hijax-sneaky-javascript-redirects-will-i-get-banned-from-google and then realised it probably belongs more on this Stack Exchange site. I'm not sure if I should delete the Stack Overflow question or just leave it on both sites? Please leave a comment.

    Read the article

  • Is it bad for SEO to have internal redirected links? [closed]

    - by Jonas Lindqvist
    I have a large number of pages with similar but not identical content. Example: site.com/dream_dictionary_flying and site.com/dream_interpretation_flying. The problem is that although they are not identical, they are sometimes on the edge of being duplicate content. The solution via a 301 redirect in .htaccess is simple and can be done in a minute, BUT changing all the existing links on the whole site from "/something" to "/something_else" would take ages; it would be thousands of manual changes taking hundreds of hours. My question is this: is it bad for SEO to have internal links that are redirected, or rather, HOW bad is it? For the human user it would not matter at all, but from what I have experienced, the search engines don't like it. Is there any rule of thumb here? Please come back with your thoughts and experience on this. Thanks!
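
    If the pages are static files, those thousands of manual changes can be scripted rather than done by hand. A minimal sketch, assuming a directory of HTML files and a hypothetical slug mapping (a database-driven site would need an UPDATE query instead):

        import pathlib

        RENAMES = {  # hypothetical old -> new slugs
            "/something": "/something_else",
        }

        for page in pathlib.Path("site").rglob("*.html"):  # adjust to the real doc root
            text = page.read_text(encoding="utf-8")
            for old, new in RENAMES.items():
                text = text.replace('href="' + old + '"', 'href="' + new + '"')
            page.write_text(text, encoding="utf-8")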

    Read the article

  • PHP accessible shared content between two websites on the same VPS on different domains/IPs

    - by Lee Fentress
    I have two ecommerce websites, selling digital music downloads, on the same VPS, currently using cPanel/WHM (but I'm thinking of switching to Virtualmin). They have separate domains and IPs, of course. They both sell from the same set of music files, so I have duplicate copies in each website's directory, which takes up a lot of disk space. How might I go about sharing the same set of music files across both sites, allowing PHP access, so that it does not break my shopping cart's ability to serve customers the downloads after they have paid for them? I thought of maybe using symlinks or something, but I don't know if that is possible, or if it would have to somehow circumvent built-in security features of the server. I'm new to VPS management.
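
    For the symlink idea, the mechanics are simple; the real question is whether the server's security settings (for example PHP's open_basedir, or per-account restrictions under cPanel) allow following a link that leaves the account's directory. A minimal sketch with hypothetical paths:

        import os

        shared = "/home/shared/music"  # hypothetical central store
        site_links = [
            "/home/site1/public_html/music",
            "/home/site2/public_html/music",
        ]

        for link in site_links:
            if not os.path.islink(link):
                os.symlink(shared, link)  # both sites now read the same copy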

    Read the article

  • Very few visitors on Analytics: incorrect setting?

    - by Akaban
    Analytics has been driving me crazy for quite a long time: I have a 2-year-old website, started with Aruba (an Italian provider) and then transferred to Hostgator. It's a WordPress blog plus a MyBB forum, and on both platforms I have the Analytics code in the footer. The problem is that the stats in Analytics are simply ridiculous compared to the numbers reported by Aruba (before) and Hostgator (now). I think the Aruba/Hostgator numbers are correct, simply because the number of daily users connected to the forum alone is higher than the Analytics numbers. I know it's a really confused request, but maybe you can help me understand what the problem is.

    Read the article

  • Is AdWords ad blocked from top spots of SERPs until it is reviewed?

    - by Omeoe
    I have an AdWords ad and a keyword with a Quality Score of 10. Inferring from the CPC of actual clicks, the max CPC is set way beyond that of the third advertiser in the SERP for this keyword (there are three ads at the top). Still, the ad is shown in the 4th spot, which is located either on the right or at the bottom of the SERP. The only catch is that the ad's status is "under review". Is that the reason why it's blocked from the top spots?

    Read the article

  • Creating an advanced website by redirecting and replacing content from Google Sites

    - by David
    I would like to create a corporate website with a members' area. Importantly, I want many novice webadmins to be able to modify the static content themselves. Therefore, I got the idea of creating the site using Google Sites and inserting placeholder elements, with a set width and height, in the places where I want dynamic content. The website would be read by PHP on a different server, and the marker elements would be replaced with dynamic content created by PHP. What would be the drawbacks of this approach?
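
    Below is a minimal sketch of the proposed pipeline, written in Python rather than PHP for illustration (the source URL and the marker convention are hypothetical):

        import re
        import urllib.request

        SOURCE = "https://sites.google.com/site/examplecorp/home"  # hypothetical URL

        # Fetch the static page that the novice webadmins edit in Google Sites.
        html = urllib.request.urlopen(SOURCE).read().decode("utf-8")

        def render_members_area():
            return "<div>dynamic member content</div>"  # placeholder generator

        # Assumed marker convention: <div class="dyn" id="members"></div>
        html = re.sub(r'<div class="dyn" id="members">.*?</div>',
                      render_members_area(), html, flags=re.S)
        print(html[:200])

    Two drawbacks are visible already: every page view costs an extra fetch of the Google Sites page unless the result is cached, and the matching breaks if Google changes the generated markup.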

    Read the article

  • joomla subscription ecommerce solution

    - by CQM
    I am using Joomla and want to implement a subscription-based ecommerce solution. I started using VirtueMart and noticed its shortcomings regarding subscription-based recurring items. It turns out VirtueMart can take its own plugins, and I found something called SimSu, which I don't want to pay for. I feel like I have brought a nuke to a knife fight, as I only need one or two subscription-based products on this site. They will be recurring payments. Is anyone familiar with a solution for this issue?

    Read the article

  • How frequently does Googlebot fetch sitemaps? Does it depend on PageRank?

    - by JITHIN JOSE
    How frequently does Google fetch sitemaps? I am now working on a high-traffic website that normally gets 30 new posts per minute, but currently it provides a sitemap that links only to the newest 100 posts (about 3 minutes' worth). Is this method enough? Do bots fetch the sitemap every 3 minutes? Do I need to change to sitemaps that list all 5M posts (a sitemap index)? How would this change affect traffic and PageRank? And does Googlebot remove URLs that were previously listed in the sitemap but are not listed now?
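
    For scale, the sitemap protocol caps a single file at 50,000 URLs, so covering millions of posts means a sitemap index that points at many child sitemaps. A minimal sketch of generating one (the URLs are placeholders):

        from xml.sax.saxutils import escape

        children = ["http://example.com/sitemap-%d.xml" % i for i in range(1, 4)]  # placeholders

        lines = ['<?xml version="1.0" encoding="UTF-8"?>',
                 '<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">']
        for url in children:
            lines.append("  <sitemap><loc>%s</loc></sitemap>" % escape(url))
        lines.append("</sitemapindex>")
        print("\n".join(lines))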

    Read the article
