Search Results

Search found 11896 results on 476 pages for 'smart pro'.

Page 152/476

  • Google Analytics: how many visitors have visited n times?

    - by Riley
    I'm trying to guess how many loyal users I have by counting the number of people that have visited the site 10 times. How can I answer this question with Google Analytics? "Visitor Loyalty" is a tempting answer, but the label for loyalty is "Visits that were the visitor's nth visit," and I want something more like "Visitors that visited n times." For example, we have 40 visits in the "51-100" visit range, but I think that could be a single user who visited 91 times. Or two users who visited 71 times each. The whole chart makes a good logic puzzle (I wonder if there's a unique solution) but doesn't easily answer the question I have.

    Read the article

  • How to get a new site indexed by Alexa

    - by JohnMerlino
    When I search for my site on Alexa, it says "Alexa Traffic Rank: No Data". So I googled the issue and came across this page: http://www.loudable.com/my-website-data-not-showing-in-alexa-get-your-website-crawled-by-alexasolution.html It says that to get a site indexed, you should click "Crawl my site" on the webmasters page. However, there is no longer a link that says "Crawl my site". So, as of now, does anyone know how to get a site indexed by Alexa so that my traffic rank will display in the Alexa index?

    Read the article

  • Will using two different tracking codes affect my SERPs?

    - by Danny Hefer
    Hello everyone, and thanks for your time! I am facing a problem after a site migration. The new site is basically an improved version of the old site, with the same content and some extras. After pointing the domain name to the new site, the old site was still online for a while but didn't get any traffic. The new site has its own tracking code. So the old tracking code has age (something like 7 years) but no visitors for a month, while the new tracking code is a month old with acceptable traffic. How do you think Google will react if I add the old tracking code to the new site? Thanks in advance!

    Read the article

  • mod_rewrite works within a directory but not at the root

    - by Anvesh Saxena
    I am having a problem with my RewriteRule for the tags portion of my site. What I have been able to debug is that the rule is being triggered, because the page tags.php is rendered, but it renders without the URL parameters. The .htaccess file with these rules sits in the root of my sub-domain and has the following content for the tags portion:

        # Rewrite rules for tags
        RewriteRule ^tags/(\w+)/(\d+)/?$ tags.php?tag_name=$1&tag_id=$2
        RewriteRule ^tags/(\w+)/?$ tags.php?tag_name=$1
        RewriteRule ^tags/?$ tags.php?tag_name=

    What I haven't been able to debug is that a similar .htaccess file exists for a directory within my sub-domain and works as expected there, with the URL parameters available:

        # Rewrite rules for tags
        RewriteRule ^tags/(\w+)/(\d+)/?$ restAPI.php?type=tags&tag_name=$1&tag_id=$2
        RewriteRule ^tags/(\w+)/?$ restAPI.php?type=tags&tag_name=$1
        RewriteRule ^tags/?$ restAPI.php?type=tags&tag_name=

    Could anyone point out the problem in my rewrite rules? I am also sometimes getting an Internal Server Error, which I suspect is related. Note: I have Apache 2.2.23 on shared hosting.
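    A plausible explanation for these exact symptoms (a guess from the description, not something the question confirms): with Apache's MultiViews option enabled, content negotiation maps a request for /tags/foo to the file tags.php before mod_rewrite ever runs, so the script renders but the rule's query-string substitution never happens. In the sub-directory that works, the target is restAPI.php, which does not share the "tags" basename, so negotiation never fires there. A minimal sketch of the root .htaccess under that assumption, if the host permits these overrides:

        # Disable content negotiation so /tags/... reaches mod_rewrite intact
        Options -MultiViews
        RewriteEngine On
        RewriteBase /

        # Rewrite rules for tags ([L] stops further rewriting, [QSA] keeps extra query parameters)
        RewriteRule ^tags/(\w+)/(\d+)/?$ tags.php?tag_name=$1&tag_id=$2 [L,QSA]
        RewriteRule ^tags/(\w+)/?$ tags.php?tag_name=$1 [L,QSA]
        RewriteRule ^tags/?$ tags.php [L]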

    Read the article

  • Using AdSense to show ads to logged-in users

    - by John
    I know that you can grant authorization permissions to Google AdSense so that it can 'log in' and see what other logged-in users can see (e.g. in a private forum), so that the ads it displays are better targeted. Extending this principle further: I am making a site which will show completely different content for each individual user (i.e. no 'common' content like a forum, in which everybody sees essentially the same thing). You could think of this content as similar to the way each Facebook user has a different news feed, yet sees the 'same' page. Complicating things further, the URLs for this site will be simple, e.g. '/home' and '/somepage', and will not usually include unique identifiers to differentiate between users (e.g. '/home?user=32i42'). My questions are: Is creating an account purely for AdSense to log in with worth it in this case, seeing as it will see its own 'personalized' version and not any other user's? More importantly: is that against the Google AdSense Terms of Service? (I can't seem to figure that one out.) How would you approach this problem?

    Read the article

  • Is it necessary to have your blog title and description in H1 and H2?

    - by Saif Bechan
    I have read an article stating that it is not necessary to have your blog title and description in heading tags on your website at all: just put the titles of the posts in h1, on both the index and the post page; on the post page, start the different sections with h2; and give widget headers h3. The title and description usually live in the logo image anyway. I have looked at the source of my favorite blog, http://net.tutsplus.com, and I see they do the same. Is this recommended?
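    For reference, a minimal sketch of the heading structure that article describes (element contents are illustrative):

        <!-- no h1/h2 for the blog title or description; they stay in the logo image -->
        <h1><a href="/post-url/">Title of the post</a></h1>
        <h2>A section within the post</h2>
        <h2>Another section</h2>
        <h3>Widget header, e.g. "Recent posts"</h3>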

    Read the article

  • Is there a way of listing files for a directory if it contains index.html?

    - by fredley
    On my server (over which I have little control), directories are listed by default, so for mysite.com/images I get:

        Index of /images
        Parent Directory
        BirdsAreHere.png
        CanYouSpot-AdBlank.jpg
        etc.

    Is putting an index.html in that directory enough to prevent people from listing the files, or is there still a way of getting at that list? Is it the same for my web root directory (mysite.com)?
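    An empty (or minimal) index.html is enough to stop Apache's auto-generated listing, because the server serves the index document instead of generating the directory index. Where the host allows overrides, the more explicit fix is to turn listings off; a sketch:

        # .htaccess in /images (assumes the host permits Options overrides)
        Options -Indexes

    Either way, individual files stay accessible to anyone who knows or guesses their URLs; this only hides the listing.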

    Read the article

  • Grid Favicon on [Scammy] Websites [closed]

    - by Kevin Dolan
    I've seen this grid favicon show up on a lot of sites, most of which tend to be for scams like oxytocin accelerator, or make $300 a day posting links to Google type sites. My question is: what is this icon and where does it come from? Is there some organization whose goal is to make terrible websites like this and they associate them with this icon or does it belong to some server software that for some reason scammy sites like to use? Does anybody know the origins of this icon?

    Read the article

  • Infrastructure to effectively set up experiments and learn from them

    - by David
    Open-org.com is in the early stages of creating our first product: a place on the web where one can ask lawyers questions at a fraction of their normal cost. An early-stage front page can be found here. I was inspired by this video, recommended by Jeff Atwood, which talks about getting feedback faster, and that is the reason for this question. The problem: needless to say, we want our conversion rates to be as high as possible. Therefore, we want to be able to rapidly set up a new experiment where we change something on the site (moving an image slightly, rewriting a sentence, etc.), present the modified page to a random subset of the users, and then compare the conversion rates of the experiment against another version. I could very well imagine that we will want to run 10-100 experiments simultaneously, and it would be nice to have a feature where experiments with obviously worse results are ended ahead of schedule. My question: does infrastructure to support this whole process exist? A short description of our setup: we use EC2 and PHP and have a script to automatically start up new instances with all the needed software. Still, starting up a new server for every experiment seems like overkill, so I am wondering what other options exist. By the way, if you feel like working for Open-org.com, you can pick a task and start working, or suggest a new task. All profits are given out to the contributors.
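    Hosted tools existed for exactly this (Google Website Optimizer, for one), but nothing about the problem requires a server per experiment: variant assignment is just a stable per-visitor choice. A minimal PHP sketch, assuming cookie-based bucketing (cookie names and variants are illustrative):

        <?php
        // Assign the visitor to a stable variant for one experiment via a cookie,
        // so many experiments can run simultaneously on the same servers.
        function variant($experiment, array $variants) {
            $cookie = 'exp_' . $experiment;
            if (isset($_COOKIE[$cookie]) && in_array($_COOKIE[$cookie], $variants, true)) {
                return $_COOKIE[$cookie];
            }
            $choice = $variants[mt_rand(0, count($variants) - 1)];
            setcookie($cookie, $choice, time() + 30 * 24 * 3600, '/');
            return $choice;
        }

        // Usage: render one of two headlines; log the variant alongside conversions.
        $v = variant('headline_test', array('control', 'shorter_copy'));
        echo ($v === 'control') ? '<h1>Ask a lawyer</h1>' : '<h1>Legal answers, fast</h1>';
        ?>

    Ending an underperforming experiment early then just means removing it from the experiment list; the analysis side (comparing conversion rates per variant) lives in whatever analytics store is already in place.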

    Read the article

  • Best way to track multiple sites with Google Analytics

    - by stevether
    I currently have 63 websites (and counting) that I'm tracking in one Google Analytics account, and I'm starting to realize this is becoming a bit cumbersome. What's the best way to collect traffic data in bulk? Are there other resources out there better suited for this task? Does Google offer a bulk option for this kind of thing? Would it be better to create separate Analytics accounts? I'm just wondering if anyone else has found a better solution than manually setting up all these accounts and tracking codes when it comes to large-scale management.

    Read the article

  • Single page not appearing in Google Search

    - by Dan
    Description: I have a static franchise website which has various sub-pages, each dedicated to an individual franchisee. The only thing slightly similar between all of these pages is the page title; they follow this structure:

        <title> Welcome to THE_COMPANY - PRODUCT_DESCRIPTION Services, THE_LOCATION </title>

    THE_COMPANY and PRODUCT_DESCRIPTION are the same across all franchisees; however, THE_LOCATION changes depending on where they are located in the UK. Each franchisee page has the following <meta /> tags:

        <meta name="DC.creator" content="user"/>
        <meta name="DC.format" content="text/html"/>
        <meta name="DC.language" content="en"/>
        <meta name="DC.date.modified" content="2014-01-23T11:22:31+00:00"/>
        <meta name="DC.date.created" content="2014-01-23T11:22:09+00:00"/>
        <meta name="DC.type" content="Page"/>
        <meta name="DC.distribution" content="Global"/>
        <meta name="robots" content="ALL"/>
        <meta name="distribution" content="Global"/>

    The main content on each franchisee page is completely different.

    The Problem: There is one particular franchisee page, located in Area A, which will not appear in Google Search results at all, even though every single other franchisee is number 1 if you search for "THE_COMPANY, THE_LOCATION". And if I do the same search on Bing, Yahoo or DuckDuckGo, the Area A franchisee is the first result on all of them. Has Google for some reason blacklisted one page on the site?

    What I Have Tried:

    - Ensuring the page is referenced in my sitemap.xml file
    - 'Fetching as Google Bot' the link www.the_company.co.uk/areaa; when that came back as OK, I would submit it to the index
    - Resubmitting the sitemap.xml file in Webmaster Tools
    - Linking to the Area A page from another page's content (for this I also waited about 3 weeks before checking again, to give Google time to re-index)
    - Making a change to the page content and waiting another 2-3 weeks
    - Removing the page completely and recreating it with an alternative URL

    The closest thing I have found to this issue is this StackOverflow question, but this particular franchisee has existed for almost a year; it used to appear in Google searches but no longer does. I'm guessing the Panda update wasn't too happy with something on the page, but it hasn't affected anything else on the site, and I am at a loss for things to try. I would greatly appreciate any information or thoughts as to what could have caused this. Thanks.

    Update: In line with Daniel Fukuda's answer below, I have followed some of his steps, but everything seems to check out alright.

    HTTP headers:

        HTTP/1.1 200 OK
        Date => Tue, 25 Feb 2014 16:31:29 GMT
        Server => Zope/(2.12.16, python 2.6.6, linux2) ZServer/1.1
        Content-Length => 40078
        Expires => Sat, 01 Jan 2000 00:00:00 GMT
        Content-Type => text/html;charset=utf-8
        Content-Language => en
        Vary => Accept-Encoding
        Connection => close

    Robots <meta /> tag (I have updated this tag to read content="INDEX" instead now):

        <meta name="robots" content="ALL"/>

    robots.txt:

        User-agent: *
        Disallow:

        User-Agent: Googlebot
        Disallow: /*sendto_form$
        Disallow: /*folder_factories$

    Using site:THE_COMPANY.co.uk: searching for 'AREA A site:THE_COMPANY.co.uk' does not return the page, but regardless of that, searching just for site:THE_COMPANY.co.uk will not necessarily return every indexed page, or so I understand...

    Update: It appears Google likes to drop pages from the index every now and then. Despite my steps above, I left the site alone and the page appeared back in the SERPs by itself.

    Read the article

  • Ranking drop after using reverse proxy for blog subdirectory and robots.txt for old blog subdomain

    - by user40387
    We have a 3Dcart store and a WordPress blog hosted on a separate server. Originally, we had a CNAME set up to point the blog to http://blog.example.com/. However, in our attempt to boost link-based and traffic-based authority on the main site, we've opted to do a reverse proxy to http://www.example.com/blog/. It's been about two months since we finished the reverse-proxy migration. Everything appears to be working as intended technically, including some robots and sitemap changes; the new URLs are even generating some traffic, as indicated in Google Analytics. While Google has been indexing the new URL locations, they're ranking very poorly, even for non-competitive, long-tail keywords. Meanwhile, the old subdomain URLs still rank mostly as well as they used to (even though they aren't showing meta titles and descriptions, due to being blocked by robots.txt). Our working theory is that Google has an old index of the subdomain URLs and is considering the new URLs duplicate content, since it's being told not to crawl the subdomain and therefore can't see the rel canonicals we have in place. To resolve this, we've updated the subdomain's robots.txt to no longer block crawling and indexing. Theoretically, seeing the canonical tag on the subdomain pages will resolve any perceived duplicate-content issues. In the meantime, we were wondering if anyone has any other ideas. We are very concerned that we'll be losing valuable traffic, as we're entering our on-season at the moment.
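    For reference, a sketch of the two pieces described above, with illustrative names (example.com standing in for the real domain):

        # robots.txt on blog.example.com - crawling no longer blocked,
        # so Google can see the canonical tags on the subdomain pages
        User-agent: *
        Disallow:

        <!-- in the <head> of each blog.example.com page, pointing at the proxied URL -->
        <link rel="canonical" href="http://www.example.com/blog/some-post/" />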

    Read the article

  • Google page events monitoring and analysis

    - by Homunculus Reticulli
    I have read the Google page-event documentation, but I am not sure I understand it correctly. I am new to Google Analytics, and I have two questions. First: once I have Google Analytics enabled for my site (i.e. I have inserted the tracking code in my pages, etc.), do I need to set anything else up at the Google end, i.e. in my Google Analytics account? Second: it is not clear to me how the event data can be aggregated and analyzed. For instance, if I want to track an event under category category for click action action, I will use the following code snippet:

        <a href="some-uri.htm" onclick="_gaq.push(['_trackEvent', 'category', 'action', 'label']);">Do Something</a>

    For the sake of simplicity, let's say I am interested in monitoring click events in my header and footer, and I want to find out on which pages the header and/or footer are clicked most often. How would I set things up so that I can analyze header/footer clicks aggregated at the page level?
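    On the aggregation question, one common pattern (a sketch, not the only option) is to make the page part of the event itself, for example by recording the page path in the label; the events can then be broken down per page directly in the Event reports:

        <!-- header link: the category identifies the region, the label records the page -->
        <a href="some-uri.htm"
           onclick="_gaq.push(['_trackEvent', 'header', 'click', document.location.pathname]);">Do Something</a>

    A footer link would use 'footer' as the category, so header and footer clicks stay separable.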

    Read the article

  • Forward .html/.htm to .php with web.config

    - by PhilipK
    I'm moving a site from my Linux-hosted server to a client's Windows-hosted server. The .htaccess file no longer works, and I'm told that Windows servers use web.config instead. How can I forward all users accessing .html and .htm files to the equivalent .php file? Server info:

        OS/Hosting Type: Windows / Shared Hosting
        .Net Runtime Version: ASP.Net 2.0/3.0/3.5
        PHP Version: PHP 5.2
        IIS Version: IIS 7.0
        Data Center: US Regional

    EDIT: Hosting provided by GoDaddy. A friend told me the following should work, but it has no effect on the site:

        <configuration>
          <system.webServer>
            <handlers>
              <add name="PHP-FastCGI" verb="*" path="*.html" modules="FastCgiModule"
                   scriptProcessor="c:\php\php-cgi.exe" resourceType="Either" />
            </handlers>
          </system.webServer>
        </configuration>
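    One thing worth noting: the snippet above is a handler mapping; it tells IIS to run *.html files through the PHP engine rather than forwarding anything, and the hard-coded scriptProcessor path is unlikely to match the host's PHP location. An actual redirect to the .php equivalent would normally use the IIS 7 URL Rewrite module instead; a sketch, assuming the module is available on the hosting plan:

        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <rule name="html-to-php" stopProcessing="true">
                  <!-- matches foo.html and foo.htm, redirects to foo.php -->
                  <match url="^(.*)\.html?$" />
                  <action type="Redirect" url="{R:1}.php" redirectType="Permanent" />
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>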

    Read the article

  • Foolproof way to ensure Google News pulls the correct image for its thumbnails?

    - by Anthony
    Google News results show an accompanying thumbnail next to articles that appear in the results. If Google's crawler can't find a suitable thumbnail on our site, it uses its next best guess from another site, thereby crediting the image to that other site while still using our headline. Example: headline from Reuters, image from Livemint. Our pages absolutely have images, and they are not massive in file size or dimensions, yet they are not being pulled/crawled correctly. We have read up on the suggestions from Google, and from others around the web, and nothing is panning out. Has anyone had any experience where they can ensure Google News will pull a thumbnail of their choosing?
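    Google's suggestions at the time reportedly centered on the image itself: inline in the article body, close to the headline, reasonably large, in a standard format, with descriptive alt text. Beyond that, an explicit image hint is cheap to add, though there is no guarantee Google News honors it; a sketch with placeholder URLs and sizes:

        <!-- in <head>: explicit hint about the article's lead image -->
        <meta property="og:image" content="http://www.example.com/images/article-lead.jpg" />

        <!-- in the article body, near the headline -->
        <img src="/images/article-lead.jpg" alt="Short description of the story" width="400" height="300" />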

    Read the article

  • Fetch as Google error 403

    - by Bojan Vidanovic
    Two weeks ago, Google stopped being able to access my website. In Webmaster Tools I can't fetch any page; I always get error 403, and the website has completely disappeared from the Google search results. I can't figure out why Google suddenly can't see the site anymore: I've checked .htaccess and there is nothing that blocks Google's crawlers, and robots.txt is fine too. The site is accessible normally for users. Has anyone had this problem? Please help!
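    A 403 that only Googlebot sees often comes from user-agent or IP based blocking by the server or a security module, rather than from anything in .htaccess or robots.txt. A quick check from a shell (a sketch; substitute the real URL) is to compare a normal request with one sent using Googlebot's user-agent string:

        # A 403 here, but 200 with a normal user-agent, points to UA-based blocking.
        # (IP-based blocking of Google's crawl range would not reproduce this way.)
        curl -I -A "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" http://www.example.com/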

    Read the article

  • My domain PageRank shows as unavailable, why is that?

    - by Emerson
    My domain, http://www.anovaordemmundial.com, was snatched by an opportunist when I failed to renew it. I know, it's all my fault. After being ripped off and buying my domain back, with everything configured and working, the PageRank for the domain shows as unavailable. Also, searches for "nova ordem mundial" (Portuguese), which used to show my domain as the first result in any language, no longer show it at all. Do you think this is temporary and the domain will recover its PageRank after a full crawl by Google? Hundreds of sites point to my domain, which is how it earned its previous relevance in searches. The domain has been back for more than 5 days already; Bing, in fact, already lists it again. Is there anything I can do to help my domain get its PageRank back? Thanks for the help!

    Read the article

  • Google's Opinion on Javascript Page Refresh

    - by user35306
    I was wondering if anyone knows Google's view on this. My company has a homepage that features a lot of third parties, and it needs to inform customers which ones are currently online, which aren't, and which are currently busy. Because this constantly changes, we have the homepage refresh itself so it shows the most relevant and up-to-date content to our users. I'm not using a meta refresh element in the http-equiv parameter to do this. Instead I have this JS call to refresh the page:

        window.setTimeout("refreshPage()", 120000);

    I just want to know whether people think Google might consider this a violation of the content guidelines, or, if it's not an outright violation, whether Google at least frowns on it. It doesn't redirect the user to a different page or anything; it just refreshes the page so that they can see the most relevant content.
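    One way to sidestep the question entirely is to refresh only the availability panel rather than the whole document, so crawlers always see a stable page. A sketch, assuming a hypothetical endpoint /availability.php that returns the rendered status markup and a container with id="status-panel":

        // Poll every two minutes and swap in the fresh status markup.
        function refreshStatuses() {
            var xhr = new XMLHttpRequest();
            xhr.onreadystatechange = function () {
                if (xhr.readyState === 4 && xhr.status === 200) {
                    document.getElementById('status-panel').innerHTML = xhr.responseText;
                }
            };
            xhr.open('GET', '/availability.php', true);
            xhr.send();
        }
        window.setInterval(refreshStatuses, 120000);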

    Read the article

  • How to SEO Optimize Javascript Image Loader?

    - by skibulk
    I am building an image-centric catalog website. It catalogs collectible gaming cards, numbering 100,000+ pages. Competitor sites receive millions of hits each month, so with the possibility of heavy traffic, I need to moderate image bandwidth while also optimizing for image SEO, and I'm looking for tips on doing so. Each page on the site features one card, with appropriate tags and descriptions. There are, however, four images for each card: one on matte cardstock, one on foil cardstock, one digital, and one digital foil. In a world with unlimited bandwidth and instant page loads, I'd simply embed all four images on the main product page with titles, alt tags, and captions to rank them according to their version keyword. In reality, a JavaScript gallery image loader seems appropriate. Here is a simplified example of my current code. Would this affect SEO in any way? Should I be doing anything differently? Note that I don't want to create a page for each image, as I'd have to duplicate the card tags and descriptions on each one, diluting PR for the main page. Thanks for any insight!

        <script type="text/javascript">
        document.write(
            '<img src="thumbnail1.jpg" data-src="version1.jpg">' +
            '<img src="thumbnail2.jpg" data-src="version2.jpg">' +
            '<img src="thumbnail3.jpg" data-src="version3.jpg">' +
            '<img src="thumbnail4.jpg" data-src="version4.jpg">'
        );
        </script>
        <noscript>
            <img src="version1.jpg">
            <img src="version2.jpg">
            <img src="version3.jpg">
            <img src="version4.jpg">
        </noscript>

    Read the article

  • Server-infrastructure recommendations

    - by Tim van Elsloo
    Here's the thing: I need a cheap, fast, reliable infrastructure that can dynamically scale (like Amazon S3: cloud storage). I'm thinking of three different types of 'servers':

    Application server:
    - Should be able to run CentOS (or another light Linux distribution)
    - Should be able to run Apache
    - Should be able to run PHP
    - Should be able to run GD (so it does rely on its CPU)
    - Should be extremely reliable and fast

    Database server:
    - Should be able to run MySQL
    - Should be able to... well, do nothing else :P
    - Should be extremely reliable and fast

    Storage server:
    - Should be able to run some kind of file-transfer daemon (like FTP, CouchDB, etc.)
    - Should be able to do nothing else
    - Should be extremely reliable and fast

    So technically, by transferring all static data to two different servers/services, the application server can totally focus on the web pages.

    My questions:
    - What services do you recommend?
    - Which is cheaper, faster and more reliable: using my own server, or using some cloud-storage/cloud-computing service (like Amazon S3, CloudFiles, etc.)?
    - How can I prevent bandwidth abuse (such as DoS attacks causing the bill to be extremely high)?
    - What's the difference between "including CDN" and "excluding CDN"? It seems the price doesn't differ at CloudFiles. Do you have to pay "including CDN" plus "excluding CDN" when you decide to enable the delivery network, or do you only pay "including CDN"?
    - Should I use my own nameserver too, or can I use my domain hoster's nameservers? What are the minimum software specifications of a nameserver? Can I write the software myself? Does anyone have a good protocol description?

    I hope you can answer my questions.

    Answers so far: I shouldn't write my own nameserver software. Instead, I should use something like BIND (http://osspro.com/2010/05/04/linux-create-your-own-domain-name-server-dns/).

    Read the article

  • Google Cache showing wrong URL

    - by Sathiya Kumar
    I checked the cache details of the URL http://property.sulekha.com/pune-properties, but the Google cache shows details for property.sulekha.com instead. I don't know why it shows up like this. It's not only http://property.sulekha.com/pune-properties but all of the Indian-city URLs, like http://property.sulekha.com/chennai-properties, http://property.sulekha.com/mumbai-properties, http://property.sulekha.com/kolkata-properties, etc. I don't even find these URLs in Google search results: if I search for Chennai properties in Google, I find property.sulekha.com and not http://property.sulekha.com/chennai-properties. Why is this happening? Please let me know.

    Read the article

  • How to test robots.txt in googlebot to find out what is being indexed

    - by Amar Jarubula
    This question is a continuation of this answer: How to check if googlebot will index a given url? As advised there, I went to Webmaster Tools and tested the contents of my robots.txt file. However, that only tells me whether the file's content is valid. For my scenario, I need to test whether disallowing some patterns actually keeps the matching URLs out. For example, I have something like this in my robots.txt:

        Disallow: /pattern*

    My understanding is that URLs containing the word pattern should not be crawled, but how do I test that this pattern is enforced when the website is indexed?
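    For what it's worth, a sketch of the rule and a quick manual check (robots.txt allows # comments):

        # For Googlebot the trailing * is redundant: this already blocks
        # every path that begins with /pattern
        User-agent: *
        Disallow: /pattern

    To see what is actually indexed under that prefix, a search like site:example.com inurl:pattern is a rough check. Keep in mind that robots.txt controls crawling, not indexing: a disallowed URL can still appear in results (without a snippet) if other pages link to it.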

    Read the article

  • Removing 301 redirect from site root

    - by Jon Clements
    I'm having a look at a friend's website (a fairly old PHP-based one) which they've been advised needs restructuring. The key points being:

    - URLs should be lower case and more "friendly".
    - The root of the domain should not be redirected.

    The first point I'm happy with (and the URLs needed tidying up anyway), and I have a draft plan of action; however, the second is baffling me, as to not only the best way to do it but also whether it should be done at all. Currently http://www.example.com/ is redirected to http://www.example.com/some-link-with-keywords/ using the following index.php in the root of the Apache2 instance:

        <?php
        $nextpage = "some-link-with-keywords/";
        header( "HTTP/1.1 301 Moved Permanently" );
        header( "Status: 301 Moved Permanently" );
        header( "Location: $nextpage" );
        exit(0); // optional but suggested, to avoid any accidental output
        ?>

    As far as I'm aware, this has been the case for around three years, and I'm sorely tempted to advise not to worry about it. It would appear taking off the 301 could:

    - potentially affect page ranking (as the 'homepage' would disappear, although it couldn't really disappear, because of the next point...)
    - introduce maintenance issues, as existing users would still have the redirected page in their cache
    - following the above, introduce duplicate content
    - confuse Google/other SEs as to what the homepage actually is now

    I may be over-analysing this, but I have a feeling it's not as simple as removing the 301 from the root and 301'ing the previous target to the root... Any suggestions (including "it's not worth it") are sincerely appreciated.
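    If the change does go ahead, the mechanical part is small: serve the real homepage at / (dropping the redirecting index.php) and permanently redirect the old keyword URL back to the root, so cached and inbound links to it keep working. A minimal .htaccess sketch under that plan, assuming mod_alias is available:

        # Send the old landing URL back to the new canonical homepage
        Redirect 301 /some-link-with-keywords/ http://www.example.com/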

    Read the article

  • redirect an old URL using web.config

    - by Dog
    I'm still very new to URL rewrites and redirects, and I'm having problems with something I thought was quite simple. I've just rebuilt a website and want to redirect the old URLs to the new ones. For example, http://www.mydomain.com/about.asp?lang=1 should now be http://www.mydomain.com/content.asp?id=100230&title=about&langid=1. Unfortunately, everything I've tried gives me errors or simply does nothing. Here is one rule I tried:

        <rule name="redirectoldabout" enabled="true" stopProcessing="true">
          <match url="( .*)" negate="true" />
          <conditions>
            <add input="{HTTP_HOST}" pattern="^mydomain\.com/about\.asp\?lang=1$" />
          </conditions>
          <action type="Redirect" url="http://www.mydomain.com/content.asp?id=100230&title=about&langid=1" redirectType="Permanent" />
        </rule>

    But I get an error back: HTTP Error 500.19 - Internal Server Error. The requested page cannot be accessed because the related configuration data for the page is invalid. Any suggestions as to what I'm doing wrong? Thanks, Dog
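    For what it's worth, the 500.19 "configuration data is invalid" message is consistent with the raw & characters in the url attribute: inside XML attributes they must be escaped as &amp;. Two further issues: {HTTP_HOST} only ever contains the host name (never the path or query string), and <match url> never sees the query string, which has its own {QUERY_STRING} server variable. A corrected sketch along those lines:

        <rule name="redirectoldabout" enabled="true" stopProcessing="true">
          <match url="^about\.asp$" />
          <conditions>
            <add input="{QUERY_STRING}" pattern="^lang=1$" />
          </conditions>
          <!-- & must be written &amp; inside XML attributes -->
          <action type="Redirect"
                  url="http://www.mydomain.com/content.asp?id=100230&amp;title=about&amp;langid=1"
                  appendQueryString="false" redirectType="Permanent" />
        </rule>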

    Read the article

  • No databases showing in phpMyAdmin

    - by Thein Hla Maw
    My website is hosted on a shared hosting service and is working fine, with updated news stored in a MySQL database. To manage the website's database, I installed phpMyAdmin in a sub-folder with the same username and password used by the website. When I log in to phpMyAdmin, I don't see my database; phpMyAdmin shows "No databases" in the left pane. Is there anything I need to configure in phpMyAdmin? Edited: these are the settings in config.inc.php. I can log in to phpMyAdmin successfully.

        $cfg['Servers'][$i]['host'] = 'hostname';
        $cfg['Servers'][$i]['port'] = '';
        $cfg['Servers'][$i]['socket'] = '';
        $cfg['Servers'][$i]['connect_type'] = 'tcp';
        $cfg['Servers'][$i]['extension'] = 'mysqli';
        $cfg['Servers'][$i]['auth_type'] = 'cookie';
        $cfg['Servers'][$i]['user'] = 'dbuser';
        $cfg['Servers'][$i]['password'] = 'password';
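    Since the login itself succeeds, the usual culprit on shared hosting is that the MySQL user has no privileges on any database yet; many control panels require explicitly attaching the user to the database. A quick check from phpMyAdmin's SQL tab:

        -- Lists what the logged-in user is actually allowed to see;
        -- if only USAGE appears, the user has no database privileges yet.
        SHOW GRANTS;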

    Read the article
