Search Results

Search found 9717 results on 389 pages for 'gkt pro'.


  • Where can I find my comScore rank?

    - by Joyce Babu
    Recently one ad network rejected my registration, stating that my site doesn't meet their minimum monthly impressions, even though the site serves thrice the required page views. When I contacted them for details, their representative hinted that they use comScore data for screening submissions. Where can I view my site's comScore ranking and details? Update: I was able to find the traffic by tagging my site with comScore Direct.

  • Recovering domain name from a person I can't find

    - by Daniel Gruszczyk
    I have a problem with one domain and I have no idea how to go about it. I am volunteering for a small charity in Sheffield (UK); more specifically, I am redoing their website. A while ago (a few years), one guy made that website for them, sorted out free hosting with another charity, bought the domain name, etc. Since the domain name is registered in his name, and he has disappeared and we have no way of finding/contacting him, we can't move it to different hosting or do virtually anything about it. Somehow the domain is being renewed every year; we know which domain registration service provider it is registered with, we know the guy's name, and that's about it. How would we go about re-registering that domain in the charity's name instead of that guy's? Is that at all possible? If we happen to get in touch with him, what should we ask for? Thanks for your help.

  • Stop bots from crawling old links with extensions

    - by Jared
    I've recently switched to MVC3, which uses extension-less URLs, but Google and Bing have a wealth of crawled links that no longer exist. So I'm trying to find out if there is a way to format robots.txt (or use some other method) to tell Google/Bing that any link that ends in an extension isn't a valid link... Is this possible? For pages that I'm concerned a user may have saved as a favorite, I'm displaying a 404 page that lists the links to use once they are redirected to the new page (I decided not to just redirect them, as I don't want to maintain these forever). For Google/Bing's sake I do have the canonical tag in the header.

        User-agent: *
        Allow: /
        Disallow: /*.*

    EDIT: I just added the 3rd line (in the text above) and it APPEARS to do what I'm wanting: allow a path, but disallow a file. Can anyone confirm this?
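
    For comparison, a more explicit sketch that targets only the old extensions (the .aspx/.html choices here are assumptions about what the old site used, and the wildcard/$ syntax is a Google/Bing extension rather than part of the original robots.txt standard):

        User-agent: *
        Allow: /
        Disallow: /*.aspx$
        Disallow: /*.html$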

  • Parsing Google site speed in Analytics

    - by Kevin Burke
    I'm having a hard time making heads or tails of the Site Speed graphs in Google Analytics. Our site speed is fluctuating wildly from month to month, despite a large sample (the report is based on "100,000's of visits") and a consistent web setup (static files served from an EC2 instance running nginx behind a load balancer). Here's our site speed, with each data point representing a week's worth of data. Over this time period we modified our source and HTTP headers to increase our cache hits on static resources by 5x. Why would it fluctuate so much? Is there any way to get more reliable information from those graphs?

  • A mechanism to include site title in every page, but not in the <title> element

    - by Saeed Neamati
    Each site can have a name, for example "site x". Each page also can have a name (or a title) that should appear in the <title> tag in the header. However, many websites out there use the combination "site name - page name" as the value for the <title> tag. I find it a little far from being semantic. On the other hand, if you only include the page title in the <title> tag, search engines won't find your site by its name. For example, if your site's name is Thought Results and you don't include it in page titles, then if you search for Thought Results, you won't find your site in SERPs. Thus I'm searching for a mechanism to include the site title in every page, but only include the page title in the <title> tag, to get more semantic results. Is there any way to achieve this?
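
    One possible approach (a sketch, not a settled best practice): keep only the page name in <title>, and expose the site name through separate markup that crawlers can also read, such as Open Graph's og:site_name. "Thought Results" below is just the example name from the question, and whether engines weight this tag for name recognition is not guaranteed:

        <head>
            <title>Page Title Here</title>
            <meta property="og:site_name" content="Thought Results">
        </head>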

  • Do subdomains need to be defined through the domain registrar?

    - by Johnny
    I have bought a new domain name from GoDaddy. Let's say it is abcd.com. On GoDaddy's DNS management page, I changed the A (Host) record to @ = 74.125.232.215, which is www.google.co.uk's IP address. Now if I type www.abcd.com, it goes directly to www.google.co.uk. But if I type http://test.abcd.com, it cannot be loaded. Do I need to define every subdomain through GoDaddy? Is this how it works? P.S. Amazon EC2 automatically generates a subdomain for users to reach their virtual PCs; that can't be registrar-dependent. P.S.2: Same question for using "www2" at the start of the URL.
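
    For illustration, a sketch of what the GoDaddy zone records might look like (reusing the IP from the question): each subdomain needs its own record, or a wildcard entry to catch names like test and www2 without defining each one. The wildcard line is the assumption here; providers like Amazon generate their instance hostnames programmatically on their own nameservers, which is why no per-name registrar step is needed there:

        @     A    74.125.232.215
        www   A    74.125.232.215
        test  A    74.125.232.215
        *     A    74.125.232.215    ; wildcard - matches www2 and any other undefined name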

  • Root domain redirects incorrectly to HTTPS instead of to WWW

    - by Ari
    TL;DR - Why do visits to my website homepage work without "www", but not visits to specific pages on it? I recently moved my website (Zappable.com) to a new web host, Red Hat's OpenShift (a PaaS). It requires using CNAME records to set up custom domains, something my domain name registrar (1&1) does not support without a hosting plan. So instead I set up CloudFlare in between my domain and web host, and created a CNAME record on it. I then pointed a 1&1 "www" sub-domain to CloudFlare, and pointed my 1&1 root to the "www" sub-domain. This works fine for visits to my homepage, but for some reason it does not work when visiting a specific page without "www". Instead of adding "www", it goes to HTTPS, which is strange.
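
    For reference, a sketch of the setup described (hostnames are from the question; the rhcloud.com target is a hypothetical OpenShift application address):

        ; at CloudFlare
        www.zappable.com.   CNAME   someapp-namespace.rhcloud.com.
        ; at 1&1 - HTTP forward of the bare domain
        zappable.com.       -> 301 redirect to http://www.zappable.com/

    One thing worth checking (an assumption, not a diagnosis): registrar-level forwards often redirect only the bare root URL rather than preserving deep paths, which would match the symptom of specific pages misbehaving without "www".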

  • Are there any guidelines for laying out screen "real estate"?

    - by Corey
    I'm wondering if there is any information about creating a decent page layout so that your website will appeal to users of all resolutions. For example, the optimal width for pages. It seems like on my resolution, most websites have their content centered, covering about 80% of the page, which is easy on the eyes. Or maybe the height of the website's logo/header -- some sites I stumble upon have a huge logo with links or navigation under it, so that I need to scroll down to see the actual content, like articles or images (these sites don't keep me for very long). I understand that every user is different and may have browser extensions, page zoom, or may be running some ancient system that displays in 640x480. I'm not looking for a "best" solution, but rather some guidelines about designing to accommodate different resolutions. Basically, how can I make sure that I don't design a page where a paragraph displays as several easy-to-read lines at my resolution, but turns into a single line at 1920x1080 and becomes hard for the user to follow?
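
    As a starting point, a common CSS sketch (the selectors are hypothetical and the 960px figure is a convention of the era, not a rule): cap the content column's width and center it, and cap paragraph measure in em units so line length stays readable at high resolutions:

        /* hypothetical selectors */
        #content { max-width: 960px; margin: 0 auto; }
        #content p { max-width: 40em; }  /* keeps lines from growing too long */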

  • Can I enable GZIP on GoDaddy?

    - by Dave
    One of my sites lives on GoDaddy's bottom-of-the-line cheap hosting. I have the correct code in my .htaccess file, but it's not compressing because mod_deflate is not loaded. How do I enable that? The best I've found is this article, which suggests I use PHP to gzip everything (which is going to be more work than just changing hosting companies): "Add the following code to the very top of your Web pages, above the DOCTYPE:"

        <?php
        // Compress the page output when the browser advertises gzip support
        if (substr_count($_SERVER['HTTP_ACCEPT_ENCODING'], 'gzip')) {
            ob_start('ob_gzhandler');
        } else {
            ob_start();
        }
        ?>
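
    If GoDaddy's shared hosting allows per-directory PHP flags (an assumption worth testing), an .htaccess-only sketch avoids editing every page; zlib.output_compression is a standard PHP ini setting:

        # .htaccess - works only where PHP runs as an Apache module
        php_flag zlib.output_compression On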

  • What liability concerns do advertising vendors raise, and how can I address them?

    - by Beofett
    One of the websites I administer wants to provide free advertising in the form of direct links to vendors at an event they are running. Up until now, there has been no advertising whatsoever on the site (or any of our other sites). The site is for a for-profit business. The idea of implicit endorsement of any vendors we advertise has been raised, which brought up the question of what we need to do, if anything, to protect ourselves from any potential problems such endorsement might create. I know that many sites have clauses in their Terms of Service that state that (in a nutshell) they are not responsible for any problems or grievances between the visitors to the site and any vendor advertised or linked. Are there other steps that a website typically takes when considering advertising, such as getting the advertiser to provide some sort of certification that their ad will not violate any trademarks or copyrighted material?

  • I need a webpage to host my JavaScript!

    - by Amir Reza
    Does anyone know a website that would host my JavaScript on their pages? I have a research project that needs to collect some RTTs (round-trip times) from all over the world and compare them together. I have written the JavaScript code for that, but I do not have a high-hit-rate website to put it on to collect data. I know it is a little bit of an odd question to ask, but do you know any website or any trick that can help me? Note that the script would not do any harm to anybody! :-) Thanks. Decad is right; I basically need some people to put my script on their high-hit-rate websites... so I can collect data from a large number of clients. Of course, the script runs in the background with no harm to the page. It basically measures some RTTs and submits them to a server. I already have some pages, but they barely get a hit from outside! Thanks.

  • DNS slows down in development environment

    - by Sequenzia
    I have a local development environment set up on my Mac. I am running an Ubuntu web server inside a VirtualBox VM. I set up a hosts-file entry on my Mac that points my dev site to the IP of the Ubuntu virtual server. Everything works well other than the fact that a lot (not all) of the time it takes more than 5 seconds to load a page. I used Firebug to track down where the problem is, and when it's slow, the DNS part of my request is taking over 5 seconds. Like I said, it's not all the time. Sometimes it resolves and loads the page within milliseconds. On one click the same page will be super fast, and then the next time it takes over 5 seconds. It's really slowing me down and I am not sure what is causing it.
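
    For what it's worth, a stall of about 5 seconds is a classic DNS resolver timeout, which hints that some lookups are escaping the hosts file. A sketch of the kind of /etc/hosts entry assumed here (names and address are hypothetical); the point is to list every hostname the pages reference, not just the main one:

        # /etc/hosts on the Mac
        192.168.56.101   devsite.local www.devsite.local static.devsite.local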

  • Does the keyword 'blog' in a URL improve SEO?

    - by slow diver
    I have seen a couple of sites which have a high number of hits. They are mostly tutorial sites and blogs that address software issues/errors. I wonder if the keyword "blog" has a very positive effect on SEO? On my own site, I installed WordPress in the root folder to avoid any "blog" keyword. I also did this to keep URLs shallow (deeper URLs are not good for SEO). But I may want to think on it again. The sites I am referring to are http://blog.sqlauthority.com and http://veerasundar.com/blog/2011/11/making-xampp-to-serve-any-directory-outside-htdocs/. I know there are standard (sort of) class names or IDs that identify different contents and make it easier for the search engine to identify contents, like "container" and "menu". Would the use of the word "blog" mean this is about discussing/tutoring something, and have a very positive effect on SEO?

  • Email provider - suggestions needed

    - by Christian Fazzini
    We are looking for a good way to have email support. In theory, we need to allow end users to send emails directly to support and careers, i.e. support@domain_name_here.com and careers@domain_name_here.com. Second, we need to provide emails to our staff, so each staff member has their own email address, i.e. joe@domain_name_here.com, meghan@domain_name_here.com, etc. Google Apps is one option we are considering. However, they are charging $50 per user, per year. Not so bad, considering the quality and the features they offer. However, there are also cheaper alternatives: my domain registrar offers an email plan for $20/year for 10 mailboxes, and GoDaddy has a number of plans that are still a lot more affordable than Google Apps. So far Namecheap and GoDaddy are the only ones I have looked at for email plans. Is it worth signing up with Google Apps? Or are there better alternatives? Your thoughts?

  • Shopping Cart URL Structure

    - by Drew
    With regard to URL structure for guests versus authenticated users: am I able to track traffic associated with both paths, while at the same time tracking total conversions going through the shopping cart? I have set up the following URL structure. Authenticated users follow this path:

        /cart
        /checkout
        /checkout-confirmation-ty

    Guests go like such:

        /cart
        /checkout-guest
        /checkout-confirmation-guest-ty

    Can I track the authenticated and guest users separately? Is this possible with Google Analytics?
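
    One common way to get both views in Google Analytics (a sketch, not a confirmed setup): the distinct paths already separate guest and authenticated traffic in the content reports, and a single destination goal with a regular-expression match can count total conversions across both flows:

        Goal type: URL Destination (match type: Regular Expression)
        Goal URL:  ^/checkout-confirmation(-guest)?-ty$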

  • CSS menu hover background color problem

    - by Guisasso
    I have a very simple question for many, but it's complicated enough for me; I have tried to fix this for the last hour with no luck. I downloaded a CSS menu and made all the modifications I needed with no issue, but there's one thing that I'm having no luck trying to fix: when hovering over "ccc" (for example) and going down to "1" (this happens in all the other cells too), the black background doesn't extend all the way to the right. Here's the link: http://riversidesheetmetal.net/gui/questions/aaa.html
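
    Without seeing the stylesheet, the usual culprit is dropdown items that are narrower than the hovered parent. A hedged sketch (the selectors are hypothetical; the real class names come from the downloaded menu's CSS):

        /* make nested dropdown items fill the dropdown's full width */
        ul.menu li ul li { width: 100%; }
        ul.menu li ul li a { display: block; background: #000; }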

  • Is this form of cloaking likely to be penalised?

    - by Flo
    I'm looking to create a website which is considerably JavaScript-heavy, built with Backbone.js, with most content being passed as JSON and loaded via Backbone. I just need some advice or opinions on the likelihood of my website being penalised for using the method of serving plain HTML (text, images, everything) to search engine bots and a JS front-end version to normal users. This is my basic plan for my site: the first request to any page returns HTML, which gives only about 1/4 of the page, and thereafter the last 3/4 is loaded with Backbone.js. Therefore non-JavaScript users get a 'bit' of the experience. Once a new user has visited and been detected to have JS, a cookie is saved on their machine, and requests from there on will be AJAX only. Example:

        If (AJAX || HasJSCookie) {
            // Pass JSON
        }

    Search-engine-served content: the entire experience of loading via AJAX will be stripped if, for example, a Google bot is detected; the same content will be served, but all as HTML. I thought about just allowing search engines to index the first 1/4 of content, but as I'm concerned about inner links and picking up every bit of content, I thought it would be better to give search engines the entire content. I plan to do this by just detecting against a list of user agents and knowing whether it's a bot or not:

        If (Bot) {
            // serve plain HTML
        }

    In addition, I plan to make clean URLs for the entire website despite full AJAX, so providing AJAX content at www.example.com/#/page and normal HTML at www.example.com/page is kind of out of the question. I would rather avoid the practice of using # when technologies such as HTML5 pushState are around. So my question is really just asking the opinion of the masses: is it likely that my website will be penalised? And can you suggest an alternative that avoids the 'noscript' method?
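
    To make the pseudocode concrete, a minimal PHP sketch of the detection described (the bot list is illustrative, the cookie name is hypothetical, and whether engines treat user-agent switching as cloaking is exactly the risk being asked about):

        <?php
        // Illustrative bot detection by user-agent substring
        $bots = array('Googlebot', 'bingbot', 'Slurp', 'Baiduspider');
        $ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
        $isBot = false;
        foreach ($bots as $bot) {
            if (stripos($ua, $bot) !== false) { $isBot = true; break; }
        }
        if ($isBot) {
            // serve the full plain-HTML rendering
        } elseif (!empty($_COOKIE['has_js'])) {
            // returning JS-capable visitor: serve JSON for the Backbone front end
        } else {
            // first visit: serve the 1/4 HTML bootstrap page
        }
        ?>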

  • Web developer has become uncooperative, what should we do to rescue our site? [closed]

    - by TOM
    What can an individual or a company do if the web designer who designed their website becomes completely uncooperative? In our case: He refuses to meet with us for discussions. He refuses to give us training on the effective use and management of the website he designed for us (and has been paid in full for), despite this training being part of the original contract. Last year he disappeared for a number of months, refused to answer emails or phone calls, and was totally unavailable to help us. He refuses to give us the details of the hosting of our website. We have totally lost faith in this arrogant and unreasonable guy and would like to break off all relationships with him, but, it appears, he's got us over a barrel.

  • Can't create site connection in Contribute. Administrator account removed from domain

    - by tribus
    I have been tasked with administering several sites through Contribute. The previous administrator has since left and his domain account was removed. I believe I had been set up as an additional admin on these sites previously. However, when I attempt to create a connection through "My Connections" in Contribute using SFTP and my credentials, I get the following error: I have verified that I have read/write access to these sites through FTP. How can I take control of these sites through Contribute so that I can start administering them? Is this related to the removal of the previous admin's account?

  • 301 URL rewrite loop

    - by anyvendetta
    I need to do a 301 rewrite to force all URLs to become lowercase. I put this in .htaccess (with RewriteMap lc int:tolower in httpd.conf):

        RewriteCond %{REQUEST_URI} [A-Z]
        RewriteRule . ${lc:%{REQUEST_URI}} [R=301,L]

    Everything works just fine except for URLs with subcategories, which in this case look like /category-1256-Product-page-example.html (the number 1256 refers to a "subcategory"). So when I try to access /category-1256-Product-page-example.html, it gives me a loop error message. I think other redirect rules are making the loop, but I don't know how to fix it, because it's just these URL rewrite rules that don't work with the above rewrite:

        RewriteRule ^main-site-url/category-([0-9]*)-([-_a-zA-Z0-9]*)\.html$ /subcategories.php?idcategory_main=1&idcategory=$1&category=$2 [L]
        RewriteRule ^main-site-url/([0-9]*)-([-_a-zA-Z0-9]*)-([0-9]*)\.html$ /file.php?idcategory_main=1&idsubcategory=$1&product=$2&idproduct=$3 [L]
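
    A hedged guess at a fix: the lowercase redirect may be re-firing after the internal rewrites to subcategories.php/file.php have run. Guarding it so it only applies to the original client request is a standard trick; REDIRECT_STATUS is empty on the first pass and set once any rewrite has fired:

        RewriteCond %{ENV:REDIRECT_STATUS} ^$
        RewriteCond %{REQUEST_URI} [A-Z]
        RewriteRule . ${lc:%{REQUEST_URI}} [R=301,L]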

  • Meta description not displaying in custom site search results page

    - by Stephen Connolly
    We have Google Custom Site Search implemented on our company website. Looking at the results page, I noticed that the meta description is not being displayed; it just seems to be reading the link titles from our drop-down menu and using those as a description. When I search for the same page via google.com, the meta description is pulled in correctly. Any thoughts on why this might be happening? I can't see anything in the Custom Site Search settings.

  • Remove IP address from the URL of a website using Apache

    - by sapatos
    I'm on an EC2 instance and have a domain domain.com linked to the EC2 nameservers, and it happily serves my pages if I type domain.com in the URL. However, when a page is served it resolves the URL to 1.1.1.10/directory/page.php. Using Apache I've set up the following VirtualHost, following the examples provided at http://httpd.apache.org/docs/2.0/dns-caveats.html:

        Listen 80
        NameVirtualHost 1.1.1.10:80
        <VirtualHost 1.1.1.10:80>
            DocumentRoot /var/www/html/directory
            ServerName domain.com
            # Other directives here ...
            <FilesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf)$">
                Header set Cache-Control "max-age=290304000, public"
            </FilesMatch>
        </VirtualHost>

    However, I'm not getting any changes to how the URL is displayed. This is the only VirtualHost configured on this site, and I've confirmed it's the one being used, as I've managed to break it a number of times while experimenting with the configuration. The Route 53 entries I have are:

        domain.com  A    1.1.1.10
        domain.com  NS   ns-11.awsdns-11.com ns-111.awsdns-11.net ns-1111.awsdns-11.org ns-1111.awsdns-11.co.uk
        domain.com  SOA  ns-11.awsdns-11.com. awsdns-hostmaster.amazon.com. 1 1100 100 1101100 11100
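
    One approach (a sketch, assuming the goal is to push visitors off the raw IP): add a second name-based VirtualHost whose ServerName is the IP itself, so requests carrying the bare address in the Host header get 301'd to the domain:

        <VirtualHost 1.1.1.10:80>
            # Matches requests addressed to the raw IP
            ServerName 1.1.1.10
            Redirect permanent / http://domain.com/
        </VirtualHost>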

  • Web Hosting Backup/Disaster Recovery Plan - Which Company?

    - by Harry Muscle
    I've been asked to look after consolidating all of our various company websites onto one host, and also to provide a disaster recovery plan in case the chosen host goes down/out of business/etc. We're most likely going to go with HostGator as our chosen host; however, I'm not sure who to pick as our backup host. HostGator uses cPanel and has the functionality to provide regular full (i.e. including configuration) backups of all the sites we host. Ideally I'm looking for a solution where we can provide these backups to another company and, within a short period of time, they restore all the sites onto their servers and we're back up and running. The whole disaster recovery process has to be fairly straightforward from the point of view of what we need to do, in case I am unavailable to assist and no one else overly technical is available (i.e.: take these backup files, send them to this company, and ask them to do this). Any suggestions on which company would be a good choice for this backup solution would be highly appreciated. Thanks, Harry

  • Images within noscript

    - by Guilherme Nascimento
    Note: My question is not about JavaScript. Note: My question is about how to make the HTML accessible to search engines. Note: My question is not about hiding text; it is about blocking the loading of images in order to use LazyLoad. I tested various techniques for blocking the loading of images so I could use the LazyLoad effect (I'm developing in JavaScript), and the only efficient one was <noscript>. This is the HTML structure with which LazyLoad loading of images is achieved via the viewport (the visible area of the website in the browser):

        <p>Lorem ipsum dolor sit amet,
            <span class="lazyload">
                <noscript><img src="foto-m0101.jpg" alt="image description"></noscript>
            </span>
            consectetur adipiscing elit.
        </p>
        <p>Lorem ipsum dolor sit amet,
            <span class="lazyload">
                <noscript><img src="foto-m0201.jpg" alt="image description"></noscript>
            </span>
            consectetur adipiscing elit.
        </p>
        <p>Lorem ipsum dolor sit amet,
            <span class="lazyload">
                <noscript><img src="foto-m0301.jpg" alt="image description"></noscript>
            </span>
            consectetur adipiscing elit.
        </p>

    Is this a bad practice for search engines? If it is, could you give an example of good practice? If there is any other issue with noscript and images, forgive me; I did not find any existing questions about noscript with images.
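
    A common alternative sketch (the data-src attribute and class names are placeholders, not part of any specific LazyLoad library): keep the real URL in a data-* attribute that the script promotes to src when the image scrolls into view, and leave the <noscript> copy in place so crawlers and non-JS users still get the image:

        <span class="lazyload">
            <img data-src="foto-m0101.jpg" alt="image description">
            <noscript><img src="foto-m0101.jpg" alt="image description"></noscript>
        </span>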

  • 1 Google Analytics account for the top-level domain + profiles for sub-domains vs. 1 account for each sub-domain

    - by Eric Nguyen
    We have the following websites:

        sg.abc.com        - an online magazine, Singapore edition
        my.abc.com        - the same online magazine, Malaysia edition
        forums.abc.com    - forums around the same subjects as the online magazine, functioning independently
        directory.abc.com - a classifieds site, also around the same subjects, functioning independently

    Each of the above websites currently has its own Google Analytics account; abc.com has a separate Google Analytics account too. sg.abc.com has the most traffic and generates the most revenue. Are there any practical benefits to merging all the above sub-domains under abc.com? I can think of more reliable analytics and consistency for sure. Are there more? Cross-sales?
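
    If the sub-domains were merged under one property, the classic ga.js cross-subdomain setup would look roughly like this ('UA-XXXXX-X' is a placeholder; _setDomainName is the documented ga.js call for sharing visitor cookies across sub-domains):

        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-XXXXX-X']);
        _gaq.push(['_setDomainName', '.abc.com']);  // shared across sg., my., forums., directory.
        _gaq.push(['_trackPageview']);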
