  • Increase traffic to a site through a site on subdomain [closed]

    - by user1716672
    Possible Duplicate: Subdomain versus subdirectory. We have two sites: one is mainly a portfolio site (built with the Yii framework) and the other is a digital shop (built with OpenCart) where we sell plugins and themes. The URLs look like www.mydomain.com and www.store.mydomain.com. Both of these sites are on the same server. We use Google Analytics and have no problem getting traffic to our store, but we have very little to our portfolio site, and we want to increase our Google ranking for it. Assuming increased traffic to the site will increase our Google ranking, we were thinking of using URL masking so the link would be www.mydomain.com/shop and this would load www.store.mydomain.com. Will this count as hits for our portfolio site? Because the .htaccess rules will ensure the subdomain is served, I don't know whether these hits will count for our store or our portfolio site...
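
    For reference, a minimal sketch of the masking rule in question, assuming Apache with mod_rewrite and mod_proxy available (the domain names are the poster's own placeholders). Note that Google Analytics attributes pageviews by the tracking code on the rendered page, not by which virtual host serves it:

        # Proxy /shop/* on the main domain to the store subdomain
        # without changing the visible URL (requires mod_proxy)
        RewriteEngine On
        RewriteRule ^shop/(.*)$ http://www.store.mydomain.com/$1 [P,L]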

  • Google Webmaster Tools Index Status: Total Indexed = 0

    - by hammad
    I previously changed my domain from www.visualstudiolearn.blogspot.com to www.visualstudiolearn.com... I had around 300 posts with the previous domain name and most of them were showing up on Google. Now that I have changed my domain name, the Index Status shows Total Indexed as 0, and when I go to the Advanced tab it says 304 (not selected) and 217 blocked by robots. I'm really depressed about this situation. Could you please help out?

  • A lot of "(direct) / (none)" traffic in Google Analytics

    - by Yoga
    My website has a lot of "(direct) / (none)" traffic (over 50%) in Google Analytics, but under "Audience", 100% are new visitors. Why is that? I am quite sure most of the audience should be new visitors, but why so much "(direct) / (none)" traffic? Update: we have since launched a new site and this number dropped significantly, so I am interested in knowing why it was so high in the past.
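
    One common cause worth checking: visits from email clients, mobile apps, bookmarks, and untagged campaign links are all reported as "(direct) / (none)". A hedged example of campaign tagging that would make such visits attributable in Google Analytics (the URL and parameter values are placeholders):

        https://www.example.com/landing?utm_source=newsletter&utm_medium=email&utm_campaign=spring_launch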

  • webmaster tools - Network Unreachable

    - by Jayapal Chandran
    Hi, Webmaster Tools for my site reports that robots.txt is unreachable, and for every link in the sitemap it says "network unreachable"; sitemap.xml is unreachable as well. These appear on the Crawl Stats page. I discussed it with the support team of my hosting provider and they said: "I have verified the Apache logs and I cannot see any issues on your website/webserver. Possible issues: there may be a routing issue from Google's servers to ours. When Googlebot's hit rate goes too high, the IP is automatically blacklisted by our firewall to avoid server load and downtime. As we do not have access to Google's services, we cannot give details from their logs." The sitemaps link shows an exclamation mark, which means the file was not reachable. What could be the problem, and how do I solve it?
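
    If the firewall theory is right, the blocked addresses should belong to genuine Googlebot hosts. A quick way to check is a reverse DNS lookup followed by a forward lookup (the IP below is the example used in Google's own documentation):

        $ host 66.249.66.1
        1.66.249.66.in-addr.arpa domain name pointer crawl-66-249-66-1.googlebot.com.
        $ host crawl-66-249-66-1.googlebot.com
        crawl-66-249-66-1.googlebot.com has address 66.249.66.1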

  • Is my Document Type Definition Still Up to Date?

    - by Sam
    Hi folks, currently all my PHP-generated rich HTML web pages start with this line, which I added to my templates years ago... <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> Am I still up to date and 2011-proof, or should I be changing this to something faster-loading/more agile/more flexible? I have heard of XHTML 1.1. When I remove this line, everything still works fine. Is it still really needed? What are my alternatives? Thanks very much for your suggestions.
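
    For reference, the usual modern alternative is the HTML5 doctype, which is far shorter and still triggers standards mode in every browser:

        <!DOCTYPE html>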

  • Allow access to WordPress site only by links in email newsletters

    - by Shane
    I send out a personal email newsletter, and have been looking into sending it via some service like MailChimp or sendy.co. Many of these email services suggest, or require, that the newsletter content be available online, in case the recipient's email app doesn't render it properly, or at all. The thing is, I don't want my newsletter contents visible to the whole world, nor do I want to require existing recipients to create accounts (or be assigned accounts) with passwords. So, the question is: how can my WordPress site content be made viewable only by clicking the link to it in the email newsletter? It shouldn't be findable in a Google search, but once at the site the visitor can view previous newsletter contents. It seems an .htaccess file would do the trick, but I have been unable to figure out the syntax for this. Thanks for your help. I have copied below two other questions, and answers, which helped me word my question clearly. Similar to this request about allowing access to a certain group while still restricting access to the world: "Is there a way to password protect a directory only in cPanel, so that the user is not prompted for the password when they access it via the web?" This person's question is the closest I could find to my situation: "Restrict direct folder access via .htaccess except via specific links".
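
    A minimal sketch of one approach, assuming Apache with mod_headers: give each newsletter an unguessable URL and send a noindex header from its directory, so the pages stay out of search results without requiring reader accounts (the directory layout is an assumption):

        # .htaccess in the newsletter archive directory
        <IfModule mod_headers.c>
            # Keep everything under this directory out of search indexes
            Header set X-Robots-Tag "noindex, nofollow"
        </IfModule>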

  • Canonicals with differing content

    - by Jimbo Jonny
    Interesting conundrum here with canonicals. Let's say I have a site with a "verified" system where other websites can become so-and-so "verified". The URL they send people to to confirm verification looks like "blah.com/verify/company1" or "blah.com/verify/company2". But logically "blah.com/verify" itself is not verifying anyone in particular, so it redirects to the signup form for getting verified, at "blah.com/verify/register". As for the actual companies registered, I figure it doesn't make sense to index every individual URL when the only difference is which company name it's saying yay or nay to, so canonicals could come in handy on those pages to condense the indexing. Yet making "blah.com/verify" the canonical "hub" doesn't work well, because it's a signup form, not a verification page, so technically it has quite different content from the various verification pages themselves. At the same time it's a bit unfair to pick one company to point all the canonical benefit to as the "hub", yet a bit wasteful to have Google index every individual verification page and spread out all that link juice. Basically, I'm just looking for advice: what's best here from a search engine standpoint?
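
    For reference, the mechanics under discussion: each per-company page would declare a single preferred URL with a canonical link element, as sketched below (URLs are from the question; whether /verify is a legitimate target is exactly the dilemma, since canonical targets are expected to carry substantially duplicate content):

        <!-- On blah.com/verify/company1, blah.com/verify/company2, ... -->
        <link rel="canonical" href="http://blah.com/verify" />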

  • Email links open in a new window [closed]

    - by Dan
    I'm asking this as an opinion question. How does everyone treat email links opening in a new window when the user's default email client is web-based? This way? <a href="mailto:[email protected]">email me</a> It will open fine for app-based email clients but open in the same window for web-based clients. Or this way? <a href="mailto:[email protected]" target="_blank">email me</a> It will open in a new tab for web-based email clients but leave a blank tab behind for app-based clients. I can't really seem to find the best of both worlds. What does everyone else do?

  • What's better for SEO for many international markets?

    - by Roy Rico
    Right now, we're working to migrate our company sites for international markets to this scheme: www.company.com/[two-letter country code], i.e. www.company.com/uk for the United Kingdom, www.company.com/au for Australia, www.company.com/jp for Japan, and www.company.com/ for the United States and non-identifiable visitors. However, in Google Webmaster Tools we can geo-target each directory but not the root, and if we geo-target the root with US, all the other markets will inherit it. Is it better to move the US market to /us/ or leave it where it is?
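
    Alongside directory geo-targeting, hreflang annotations are worth noting; they tell Google explicitly which URL serves which market, including an x-default for the unassigned root. A hedged sketch (the language/region codes are assumptions based on the question):

        <link rel="alternate" hreflang="en-gb" href="http://www.company.com/uk/" />
        <link rel="alternate" hreflang="en-au" href="http://www.company.com/au/" />
        <link rel="alternate" hreflang="ja" href="http://www.company.com/jp/" />
        <link rel="alternate" hreflang="x-default" href="http://www.company.com/" />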

  • WordPress bot issues

    - by Paul
    I need to implement a blog in a client's site, as he is unhappy with his current basic CMS-driven solution. It needs to suit both SEO and the current style, and as I'm a front-end dev/designer and they don't have the budget to redevelop, the only solution I can think of is to set up a WordPress blog and restyle it to suit. My only worry about this is the current press reports on WordPress being attacked by web bots. I understand the main concern is using an ID of "admin", but I'm worried that regardless of this the site could be bombarded with bot requests and suffer timeouts. Is this concern valid? If so, is there any way to avoid the issue? If not, can anyone recommend another good SEO-friendly blog solution?
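
    A minimal sketch of one common mitigation, assuming Apache 2.4: restrict wp-login.php to known addresses so brute-force bots get a 403 before WordPress ever runs (the IP range is a placeholder):

        # .htaccess: only allow logins from a trusted network
        <Files wp-login.php>
            Require ip 203.0.113.0/24
        </Files>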

  • Sharing banner on 3rd party websites, concerned about limited resources

    - by Omne
    I've made a banner for my website and I'm planning to ask my followers to share it on their websites to help improve my rank. My website is hosted on GAE, the banners are less than 5 KB each, and I must say I don't want to pay for extra bandwidth. I've read the Google App Engine Quotas page, but honestly I don't understand any of it. Would you please tell me which table/data on that page should concern me? Also, do you think it's wise to host such banners, which are going to end up on third-party websites, on GAE? Or am I safer using a free online service like Google Picasa?

  • Installing an asp application on a dnn server

    - by Cody Henrichsen
    I created a registration DB/web application in C# for some workshops. The organization requesting it is hosted on a DotNetNuke server. What changes do I need to make to the web.config so the application can run under the site? Currently, when I try to go to a page I get an error: Server Error in '/' Application. Configuration Error. Description: An error occurred during the processing of a configuration file required to service this request. Please review the specific error details below and modify your configuration file appropriately. Parser Error Message: It is an error to use a section registered as allowDefinition='MachineToApplication' beyond application level. This error can be caused by a virtual directory not being configured as an application in IIS.
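
    Two fixes are commonly suggested for this parser error: have the subfolder configured as its own application in IIS, or stop the parent DNN web.config from being inherited by the child application. A hedged sketch of the latter, applied to the parent's web.config:

        <!-- Wrap the parent's <system.web> so it does not flow into child apps -->
        <location path="." inheritInChildApplications="false">
          <system.web>
            <!-- the existing DNN system.web settings go here, unchanged -->
          </system.web>
        </location>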

  • Chrome "refusing to execute script"

    - by TestSubject528491
    In the head of my HTML page, I have: <script src="https://raw.github.com/cloudhead/less.js/master/dist/less-1.3.3.js"></script> When I load the page in my browser (Google Chrome v27.0.1453.116) and enable the developer tools, it says Refused to execute script from 'https://raw.github.com/cloudhead/less.js/master/dist/less-1.3.3.js' because its MIME type ('text/plain') is not executable, and strict MIME type checking is enabled. Indeed, the script won't run. Why does Chrome think this is a plaintext file? It clearly has a .js file extension. Since I'm using HTML5, I omitted the type attribute, so I thought that might be causing the problem, and added type="text/javascript" to the <script> tag. Same result. I even tried type="application/javascript": still the same error. Then I tried changing it to type="text/plain" just out of curiosity; the browser did not report an error, but of course the JavaScript did not run. Finally I thought the periods in the filename might be throwing the browser off, so in my HTML I changed all the periods to the URL escape character %2E: <script src="https://raw.github.com/cloudhead/less%2Ejs/master/dist/less-1%2E3%2E3.js"></script> This still did not work. The only thing that truly works (i.e. the browser gives no error and the JS runs) is to download the file, upload it to a local directory, and change the src value to the local file. I'd rather not do this, since I'm trying to save space on my own website. How do I get Chrome to recognize that the linked file is actually JavaScript?
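
    The cause is server-side: raw.github.com serves files with a Content-Type of text/plain, and no attribute on the <script> tag can override strict MIME checking. A hedged sketch of the usual fix, loading the same library from a CDN that sends a JavaScript MIME type (the cdnjs path is an assumption; verify it exists for this version):

        <script src="https://cdnjs.cloudflare.com/ajax/libs/less.js/1.3.3/less.min.js"></script>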

  • Paypal Automatic Billing API

    - by Dale Burrell
    PayPal offers Automatic Billing buttons (https://merchant.paypal.com/us/cgi-bin/?cmd=_render-content&content_ID=developer/e_howto_html_autobill_buttons#id105ED800NBF) which allow regular billing for different amounts. After a couple of hours of googling, I cannot find out how to access this functionality through the API, so that it can be automated as opposed to done manually via the PayPal account. Is it possible? Can someone point me to a sample or reference?

  • Strange robots.txt - how and why did it get there?

    - by Mick
    I recently created a very simple, pure-HTML website which I have hosted with HostMonster. HostMonster had very good reviews on some comparison websites, and in general so far they appeared to be perfectly good in every way... at least I thought so until just now. I have been making lots of edits to my site on an almost daily basis. My site now appears on the first page (7th on the list) for my most important keyphrase in a Google search. But I did notice a problem with the snippet chosen by Google. I asked a question on this site about snippets and got some great answers, then made some modifications to my metadata, and within 48 hours the Google snippet for my search was perfect. The odd thing, though, was that the "cached" version Google had still appeared to be very old, from three weeks previous. This seemed very strange: how could the Google robots have read my new metadata without updating the cache? This puzzled me greatly. Just now it occurred to me that maybe I had some goofy setting in my robots.txt file. I didn't actually remember making one, but I thought I'd have a look just in case. Much to my horror, I saw that there was a robots.txt and it contained the disturbing text below: sitemap: http://cdn.attracta.com/sitemap/728687.xml.gz Intuitively this looks like some kind of junk/spam trick, and I had indeed been getting some spam from "Attracta". So my questions are: 1. Should I simply delete this robots.txt? 2. Was the file there all along, placed there because of some commercial tie-in between Attracta and HostMonster? 3. Does the Attracta robots.txt explain the lack of re-caching?
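
    For reference, a minimal replacement robots.txt that drops the injected line and allows all crawling (the sitemap directive is optional and should point at your own file, if you have one):

        User-agent: *
        Disallow:

        Sitemap: http://www.example.com/sitemap.xml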

  • Advice about a website design [closed]

    - by Dimitri
    I am a web-developer newbie. That doesn't mean I don't know HTML/CSS/JavaScript, but I am not good at web design. I am making a website for a friend's barber shop, but I am not totally happy with my work due to its lack of design. I would like some advice about the website: how can I improve the design? The website is in French, because I am French. Here is the website: http://afrostyle92.fr/.

  • Affiliate software to attract incoming customers

    - by Steve
    I am close to starting a new website for a small business which imports products from the USA to Australia. The wholesaler says he will allow my client to be the sole distributor for Australia and New Zealand. I'm not sure what CMS or shopping-cart software to use yet, but it will need to include an affiliate system to allow advertisers to push customers our way. Do you have any suggestions for robust, flexible affiliate software? Thanks.

  • Apache domain redirect to subfolder

    - by Dennis
    I have a hosting account with GoDaddy. It's a Linux system running Apache. The way they do their setup, your primary domain is the root folder, and when you add a subdomain it lives in a subfolder of the root, which sucks. I want to set up a subfolder structure to organize my domains. I called GoDaddy support and they said to use redirects, but could not explain how. How it's set up now: the primary domain www.domain.com maps to /, and sub.domain.com maps to /sub. I want to create a directory structure and map each domain into it while only showing www.domain.com in the URL: www.domain.com to /domain/www, and sub.domain.com to /domain/sub. I tried using: RewriteEngine On RewriteCond %{HTTP_HOST} ^(www.)?domain.com$ RewriteRule ^(/)?$ domain/www [L] but it just changes the URL to www.domain.com/domain/www. Can this be done in .htaccess?
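
    A hedged sketch of the usual fix: rewrite every request, not just the root, into the subfolder internally, and exclude paths already under it so the rule neither loops nor shows up in the address bar (domain and folder names are the poster's placeholders):

        RewriteEngine On
        # Map www.domain.com/* internally to /domain/www/* (no R flag,
        # so the browser's address bar is left untouched)
        RewriteCond %{HTTP_HOST} ^(www\.)?domain\.com$ [NC]
        RewriteCond %{REQUEST_URI} !^/domain/www/
        RewriteRule ^(.*)$ /domain/www/$1 [L]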

  • Daily Blog Archives and Duplicate Content

    - by nemmy
    A few weeks back I realised that my blog software was creating daily post archives, which basically resulted in duplicate content, especially when I only had one post a day. The situation is something like this: www.sitename.com/blog/archives/2013/06/01 (the daily archive for 1 June 2013) and www.sitename.com/blog/archives/2013/06/my-post-name.html. So here we have two pages that are basically identical, except the daily archive has some meaningless title like "Daily Archive for 1 June 2013". And I have no control over which page Google decides is the primary content: it's quite possible (and likely) that the daily archive could be the "primary" content and the actual post itself the "duplicate". Once I realised it was doing this, I modified the daily archive template to include <meta name="robots" content="noindex"> Here we are a few weeks later, and I still see some daily archives coming up in Google search results. I realise some of those deep pages might not have been recrawled yet, but I am worried that the original posts (which should be the primary content) have been marked as duplicate content by Google. Now that I've noindexed the daily archives, I might end up with noindexed archives AND the original articles still flagged as duplicates, and nothing will show up in search at all. Have I screwed myself here, or is there a way out?

  • How to get tens of millions of pages indexed by Google bot?

    - by Chris Adragna
    We are currently developing a site that has 8 million unique pages, will grow to about 20 million right away, and eventually to about 50 million or more. Before you criticize... yes, it provides unique, useful content. We continually process raw data from public records, and by doing some data scrubbing, entity rollups, and relationship mapping, we've been able to generate quality content: a site that's quite useful and also unique, in part due to the breadth of the data. Its PR is 0 (new domain, no links), and we're getting spidered at a rate of about 500 pages per day, putting us at about 30,000 pages indexed thus far. At this rate, it would take over 400 years to index all of our data. I have two questions: 1. Is the rate of indexing directly correlated with PR, and by that I mean is it correlated enough that purchasing an old domain with good PR would get us a workable indexing rate (in the neighborhood of 100,000 pages per day)? 2. Are there any SEO consultants who specialize in aiding the indexing process itself? We're otherwise doing very well with SEO, on-page especially; besides, the competition for our "long-tail" keyword phrases is pretty low, so our success hinges mostly on the number of pages indexed. Our main competitor has achieved approximately 20 million pages indexed in just over a year, along with an Alexa ranking around 2000. Noteworthy qualities we have in place: page download speed is pretty good (250-500 ms); no errors (no 404 or 500 errors when getting spidered); we use Google Webmaster Tools and log in daily; friendly URLs are in place. I'm afraid to submit sitemaps: some SEO community postings suggest a new site with millions of pages and no PR is suspicious, and there is a Google video of Matt Cutts speaking of a staged on-boarding of large sites in order to avoid increased scrutiny (at approximately 2:30 in the video). Clickable site links reach all pages, no more than four levels deep, with typically no more than 250-ish internal links per page; anchor text for internal links is logical and adds relevance hierarchically to the data on the detail pages. We had previously set the crawl rate to the highest on Webmaster Tools (only about a page every two seconds, max); I recently turned it back to "let Google decide", which is what is advised.
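
    If sitemaps are eventually submitted, the scale is mechanical as well as political: the sitemap protocol caps each file at 50,000 URLs, so tens of millions of pages means a sitemap index pointing at hundreds or thousands of child files. A minimal sketch (domain and paths are placeholders):

        <?xml version="1.0" encoding="UTF-8"?>
        <sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <!-- each child sitemap holds at most 50,000 URLs -->
          <sitemap><loc>http://www.example.com/sitemaps/pages-0001.xml.gz</loc></sitemap>
          <sitemap><loc>http://www.example.com/sitemaps/pages-0002.xml.gz</loc></sitemap>
          <!-- ... through pages-1000.xml.gz for ~50M URLs -->
        </sitemapindex>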

  • Why is Google Webmaster Tools crawling invalid URLS and showing 500 errors?

    - by Amos Kane
    Google Webmaster Tools is reporting 12k+ 500 errors. Eeek! None of the URLs is valid: they all contain www.youtube.com. First, why is Google crawling these URLs if they don't exist? I supplied a sitemap, and they are of course not in the sitemap. I don't have a robots.txt blocking anything. I've checked for invalid redirects: none. I've checked for unclosed tags or anything else that would throw www.youtube.com into a URL by accident: none. In every "linked from" entry, the referring URL is also a bad URL with www.youtube.com in it. Google's tools report no malware, and I can't check the server logs because the host won't give me access. Really stuck! Any ideas appreciated!

  • What alternatives to Animated GIFs are supported on all modern browsers?

    - by Clay Nichols
    I was looking for an alternative to animated GIFs for animated images. But per caniuse.com, support for APNG seems to be being phased out, and MNG support isn't even listed there; pages about it don't even mention Chrome (suggesting those pages are very, very old). Clarification: this is for a web app, so it'll need to support Safari on iPad (so it can't depend on extensions), Chrome on Windows and Mac, Safari 6.0+ on Mac, and Chrome on Android.
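
    One option with wide support among those targets is a short, muted, looping HTML5 video with an image fallback; a hedged sketch (file names are placeholders, and note that Safari on older iPads may require a tap before playback starts):

        <video autoplay loop muted>
          <source src="animation.mp4" type="video/mp4">
          <source src="animation.webm" type="video/webm">
          <!-- fallback for browsers without video support -->
          <img src="animation.gif" alt="animation">
        </video>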

  • the limit of pageviews per month in Google Analytics

    - by crmpicco
    I have been looking around to try to find some confirmation and clarity on the limit of pageviews that Google allows per month for a Google Analytics account. I have read that the limit is 10,000,000 hits per month and 5,000,000 pageviews. Putting two and two together, I'm thinking the other 5,000,000 are allowed for events, social clicks, and the like? Google's documentation states 5 million, but the hits-versus-pageviews distinction is a bit of a grey area, as I've read suggestions that the limit can be considered to be 10 million.

  • Permanent redirect to different domain followed by temporary redirect to folder

    - by Ricardo Amaral
    I have old-domain.com, which I want to migrate to new-domain.com. However, the content on the old domain is, well, old, and I'm currently in the process of redesigning my whole site. My idea is to do a permanent (301) redirect from old-domain.com to new-domain.com, so that search engines learn about the new domain and forget about the old one. But since the content is old, I was thinking of doing a temporary (302) redirect from new-domain.com to new-domain.com/old/ until the new content/site is ready to be published. Is this a bad idea for some reason, or is there nothing wrong with it? One last thing: if I go with this, what should I do when the new content is ready? Should I just remove the 302 redirect and that's it, or should I do something else to notify search engines that the temporary redirect is over?
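
    A hedged sketch of both rules, assuming Apache with mod_rewrite (the first block belongs in the old domain's configuration, the second in the new domain's; delete the 302 rule when the redesign ships):

        # On old-domain.com: permanent move to the new domain
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www\.)?old-domain\.com$ [NC]
        RewriteRule ^(.*)$ http://new-domain.com/$1 [R=301,L]

        # On new-domain.com: temporary redirect of the root to the old
        # content until the new site is ready
        RewriteEngine On
        RewriteRule ^$ /old/ [R=302,L]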

  • Amazon Web Services Free Trial: query about get and put requests

    - by abel
    Amazon recently introduced a free tier for its cloud offering. I signed up for AWS, and while signing up for the free tier of S3 I found this: "As part of the AWS Free Usage Tier, you can get started with Amazon S3 for free. Upon sign-up, new AWS customers receive 5 GB of Amazon S3 storage, 20,000 GET requests, 2,000 PUT requests, 15 GB of bandwidth in and 15 GB of bandwidth out each month for one year." (source: aws.amazon.com, emphasis mine). 20,000 GET requests and 2,000 PUTs mean at most 20,000 page views and 2,000 file uploads per month. Isn't that far lower than what App Engine offers, 43,200,000 requests per day? Am I missing something? Please help.
