Search Results

Search found 9728 results on 390 pages for 'zee pro'.


  • Should I index my mobile duplicate of my desktop website on google?

    - by Roy
    I have a duplicate of http://he.thenamestork.com at the URL http://he.thenamestork.com/mobile - all files are duplicated, while the mobile version has slightly different content. Note that when I write 'mobile' I only mean regular HTML4 with smartphone-friendly CSS. I have a series of redirects (using .htaccess) that lets smartphone users land directly on the mobile versions. But I wonder, should the mobile version be indexed as well, so those users can get direct, faster links? And what is the proper way of doing that without causing problems in Google search? I guess I'm asking whether there's a way to get Google to display regular URLs for desktop users and ../mobile/.. URLs for smartphone users, and whether that is smart SEO-wise.
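
    Serving desktop and smartphone users different URLs in the results is what Google's "separate mobile URLs" annotations are for. A minimal sketch, assuming a desktop page and its /mobile counterpart ("some-page" and the breakpoint are illustrative, not from the question):

        <!-- on the desktop page: point crawlers at the smartphone version -->
        <link rel="alternate"
              media="only screen and (max-width: 640px)"
              href="http://he.thenamestork.com/mobile/some-page">

        <!-- on the mobile page: declare the desktop URL as canonical -->
        <link rel="canonical" href="http://he.thenamestork.com/some-page">

    With both annotations in place, Google indexes the pair as one document but can show the /mobile URL to searchers on smartphones.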

  • How should a site respond to automated login attempts with phony usernames?

    - by qntmfred
    For the last couple of weeks I've been seeing a consistent stream of 15-30 invalid login attempts per hour on my site. Many of them use nonsensical usernames that nobody would ever register for real, and they often contain typical spam-related keywords. They all come from different IP addresses, so I can't just IP-block/throttle the requests. I'm not worried about unauthorized access to real accounts, since they aren't using real usernames. And if it were a member of my site trying to brute-force logins, they could easily scrape the valid usernames from the site, so I'm not worried about that kind of malicious behavior either. But what's the point of this type of activity? What does whoever is operating these bots have to gain by attempting all these logins?

  • Error running phusion passenger in standalone mode

    - by msidell
    I'm trying to run standalone Phusion Passenger so that I can run different Ruby RVM configurations on the same host. I already have Ruby and Passenger running fine on this host, and I am following the instructions here. When I run standalone Passenger the first time, it appears to successfully install nginx, but when it then tries to run, I get this error:

        [root@clark directra]# passenger start -a 127.0.0.1 -p 3001 -d --user dweb
        *** ERROR ***
        Could not start Passenger Nginx core:
        nginx: [alert] could not open error log file: open() "/tmp/passenger-standalone.16757/logs/error.log" failed (2: No such file or directory)
        nginx: [alert] Unable to start the Phusion Passenger watchdog (/var/lib/passenger-standalone/3.0.11-x86-ruby1.9.3-linux-gcc4.1.2-1002/support/agents/PassengerWatchdog): Permission denied (13) (13: Permission denied)
        Stopping web server... done

    FWIW, /tmp is writable. Any idea what's wrong?
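
    Both alerts point at filesystem permissions rather than Passenger's configuration, so a few hedged checks may narrow it down (the paths are taken from the error message; this is diagnosis, not a confirmed fix):

        # is /tmp mounted noexec or with other restrictive options?
        mount | grep /tmp

        # can the --user account actually reach and execute the watchdog?
        sudo -u dweb ls -l /var/lib/passenger-standalone/3.0.11-x86-ruby1.9.3-linux-gcc4.1.2-1002/support/agents/
        sudo -u dweb test -x /var/lib/passenger-standalone/3.0.11-x86-ruby1.9.3-linux-gcc4.1.2-1002/support/agents/PassengerWatchdog \
            && echo "watchdog is executable" || echo "permission problem"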

  • Optimizing data downloaded via 'link' media queries and asynchronous loading

    - by adam-asdf
    I have a website that tries to make sensible use of media queries and avoid 'expensive' CSS for users of mobile devices. My eventual goal is to make it 'mobile-first', but for now, since it is based on Twitter Bootstrap, it isn't. I included some background images (Base64-encoded) and styles that only apply to "full-size" browsers in a separate stylesheet loaded asynchronously via modernizr.load. In Firefox (but not WebKit browsers) this makes it so that if you navigate away from the homepage and then return, the content (specifically, all those extras) 'blinks' when it finishes loading... or maybe I should say reloading. If, instead of using modernizr.load, I include that stylesheet via a link... in the head with a media query attribute, will that prevent the data from being downloaded by non-matching browsers (mobile, based on screen size) that it is inapplicable to?
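
    The construct in question is a stylesheet link gated by a media attribute; a minimal sketch (the file name and breakpoint are illustrative):

        <link rel="stylesheet"
              href="desktop-extras.css"
              media="only screen and (min-width: 768px)">

    One caveat worth knowing: browsers are permitted to download non-matching stylesheets anyway (typically at low priority) and simply not apply them, so a media attribute reliably gates rendering cost but not necessarily the download itself.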

  • Why am I getting domainpark.cgi being called from my website?

    - by Sean
    I used to test my site on www.exampleone.com, and I have now moved to the real domain www.realdomain.com; www.exampleone.com is now parked by 1and1 (the default). When I check which requests are made by www.realdomain.com, I see domainpark.cgi and park.js from Sedo Parking being requested, as well as the JS that serves the ads by adclicks. How do I get rid of this? It's not on the index page at all, and it's causing a lot of strain and slowing my site down.

  • How can one keep an ecommerce site active?

    - by Mantorok
    So, you build an e-commerce site, all your products are on there, but then very little changes, which obviously causes your site to become less active and ultimately rank less highly in search engines. Is there anything that can be done to keep it active? I'm aware that inbound links are important, and I guess these come over time; are there any other recommended means of keeping the site active?

  • Forum widget for website

    - by Ivan Kuckir
    I have a Discussion section on my website and I am getting 3-10 comments each week (this may increase in the future). I am using one Disqus "widget" for the whole discussion, but I would like to give it some better structure, with threads, categories, etc. Do you know of a "forum widget" service with the functionality of phpBB (threads, categories, ...) and the simplicity of Disqus (installed with an IFRAME, login with Facebook/Google, ...)?

  • How much time does Google Webmaster Tools need to generate content keywords if URL masking is enabled? [closed]

    - by user1439968
    Possible Duplicate: What is domain “masking” or “cloaking”? Why should it be avoided for a new web site? My real domain is domain.in, but URL masking has been enabled and the masked URL is domain2.in. Given that, I added the URL bputdoubts.21backlogs.in to Google Webmaster Tools a week ago, but content keywords haven't been generated. When can I expect the content keywords to be generated? And is there a problem getting visitors from Google search if URL masking is enabled?

  • Tracking URL Goals to an external site from a landing page

    - by Arel
    I have a landing page promoting an iOS app. The page is at vitogo.com. I've set up a goal for when a user clicks on the link to go to iTunes to download the app: I set up a URL destination goal in the property for the site, and I can see the goal set up in the reports section. The problem is that it isn't tracking any clicks. I've had the goal set up for a while now and it hasn't tracked anything. Thanks for the help!
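
    A URL destination goal can only fire on pages that run the site's own Analytics tag, which a page on itunes.apple.com never will. The usual workaround in classic (ga.js) Analytics is to record the outbound click as a virtual pageview and point the goal at that virtual path. A sketch, where '/goal/itunes-click' is a hypothetical path of your choosing:

        <a href="http://itunes.apple.com/us/app/..."
           onclick="_gaq.push(['_trackPageview', '/goal/itunes-click']);">
          Download the app
        </a>

    The URL destination goal would then be set to /goal/itunes-click.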

  • Google crawling the site but refusing to index dynamic content

    - by Omeoe
    I am trying to get Google to index an AJAX site (davidelifestyle.com). It's crawlable with JavaScript turned off, and I have also recently implemented the _escaped_fragment_ snapshot mechanism, but all that's indexed is the home page and PDF files that are not directly available from the home page. Also, when I use Fetch as Google in Webmaster Tools, it downloads the dynamic page but does not index it ("Submit to Index" just reloads the page). Any ideas what might be wrong?
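
    For reference, Google's AJAX crawling scheme has two halves: pages opt in (via #! URLs, or a meta tag for pages without hash fragments), and the server must answer the rewritten _escaped_fragment_ URL with a rendered HTML snapshot. A sketch of the opt-in tag:

        <!-- tells the crawler to re-request this page as ?_escaped_fragment_= -->
        <meta name="fragment" content="!">

    With that tag on the home page, the snapshot would need to be served at http://davidelifestyle.com/?_escaped_fragment_= with the full content that JavaScript normally renders.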

  • How do I use IIS7 rewrite to redirect requests for (HTTP or HTTPS)://(www or no-www).domainaliases.ext to HTTPS://maindomain.ext

    - by costax
    I have multiple domain names assigned to the same site and I want all possible access combinations redirected to one domain. In other words, whether the visitor uses http://domainalias.ext or http://www.domainalias.ext or https://www.domainalias3.ext or https://domainalias4.ext or any other combination, including http://maindomain.ext, http://www.maindomain.ext, and https://www.maindomain.ext, they should all be redirected to https://maindomain.ext. I currently use the following code to partially achieve my objectives:

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <rule name="CanonicalHostNameRule" stopProcessing="true">
                  <match url="(.*)" />
                  <conditions>
                    <add input="{HTTP_HOST}" pattern="^MAINDOMAIN\.EXT$" negate="true" />
                  </conditions>
                  <action type="Redirect" redirectType="Permanent" url="https://MAINDOMAIN.EXT/{R:1}" />
                </rule>
                <rule name="HTTP2HTTPS" stopProcessing="true">
                  <match url="(.*)" />
                  <conditions>
                    <add input="{HTTPS}" pattern="off" ignoreCase="true" />
                  </conditions>
                  <action type="Redirect" redirectType="Permanent" url="https://MAINDOMAIN.EXT/{R:1}" />
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>

    ...but it fails to work in all instances: it does not redirect to https://maindomain.ext when the user enters https://(www.)domainalias.ext. So my question is: can anyone familiar with IIS7 rewrite help me modify my existing code to cover all possibilities and reroute all my domain aliases, by themselves or with www in front, over HTTP or HTTPS, to my main domain in HTTPS? The logic would be: if the entire URL does NOT start with https://maindomain.ext, then REDIRECT to https://maindomain.ext/(plus_whatever_else_that_followed). Thank you very much for your attention; any help would be appreciated. NOTE TO MODS: If my question is not in the correct format, please edit or advise. Thanks in advance.
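
    The two rules can be collapsed into a single rule that fires when either condition holds, using logicalGrouping="MatchAny" on the conditions element. A hedged sketch along those lines (untested against this exact site):

        <rule name="CanonicalHttpsRule" stopProcessing="true">
          <match url="(.*)" />
          <conditions logicalGrouping="MatchAny">
            <add input="{HTTP_HOST}" pattern="^MAINDOMAIN\.EXT$" negate="true" />
            <add input="{HTTPS}" pattern="off" ignoreCase="true" />
          </conditions>
          <action type="Redirect" redirectType="Permanent" url="https://MAINDOMAIN.EXT/{R:1}" />
        </rule>

    One caveat: for https://domainalias.ext to reach the rewrite module at all, the alias needs a valid HTTPS binding and certificate on the server; otherwise the TLS handshake fails before any redirect can be issued, which may be why the HTTPS alias cases appear not to work.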

  • How can a domain use its own nameservers?

    - by Thomas Clayson
    I have to change the MX DNS records for our company domain name, and I've come across this odd situation: a whois search shows that the nameservers for ourcompany.com are ns1.ourcompany.com and ns2.ourcompany.com. In the DNS settings at the registrar there are no A/CNAME records at all. However, the nameservers are defined in the DNS settings for the domain on our dedicated server. (Registrar and host are two different companies.) Using the DNS lookup on http://www.mxtoolbox.com/ shows that ns2.ourcompany.com reports the correct IP for our dedicated server. It's all very odd: the DNS on the dedicated server doesn't seem to have much effect, but the DNS at the registrar's end doesn't have any records either. Thanks for your help.
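
    The command-line version of the mxtoolbox check makes the delegation chain easier to see; a couple of standard lookups (hostnames from the question; the gTLD server named is one of the real .com servers):

        # ask the .com zone directly: the delegation NS records and their
        # "glue" A records live in the parent zone, not in the registrar's
        # ordinary record editor
        dig @a.gtld-servers.net ourcompany.com NS

        # what public resolution returns for the nameserver's own address
        dig +short A ns1.ourcompany.com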

  • Does the google crawler really guess URL patterns and index pages that were never linked against?

    - by Dominik
    I'm experiencing problems with indexed pages which were (probably) never linked to. Here's the setup:

    1. Data server: an application with a RESTful interface which provides the data
    2. Website A: provides the data of (1) at http://website-a.example.com/?id=RESOURCE_ID
    3. Website B: provides the data of (1) at http://website-b.example.com/?id=OTHER_RESOURCE_ID

    So all the non-private data is stored on (1), and websites (2) and (3) can fetch and display this data, each as its own representation with additional cross-linking between the two sites. In fact, the URL /?id=1 of website-a points to the same resource as /?id=1 of website-b; however, the resource id:1 is useless at website-b. Unfortunately, the Google index for website-b now contains several links to resources belonging to website-a, and vice versa. I have "heard" that the Google crawler tries to determine the URL pattern (which makes sense for deciding which pages should go into the index and which not), and furthermore guesses other URLs by trying different values (like "I know that id 1 exists, let's try 2, 3, 4, ..."). Is there any evidence that the Google crawler really behaves that way (which I doubt)? My guess is that the Google crawler submitted an HTML form and somehow got links to those unwanted resources. I found some similar questions posted about this, including "Google webmaster central: indexing and posting false pages" [link removed]; however, none of those pages gives any evidence.

  • What is the advantage to hosting static resources on a separate domain?

    - by Michael Ekstrand
    I notice a lot of sites host their resources on a separate domain from the main site, e.g. Stack Exchange using sstatic.net, Barnes & Noble using imagesbn.com, etc. I understand that there are benefits to putting your static resources on a separate host, possibly with an efficient static-file web server like nginx, freeing up the main server to focus on serving dynamic content. Similarly, outsourcing to a shared CDN like CloudFront or Akamai is logical. What is the benefit to using a separate domain otherwise, though? Why sstatic.net instead of static.stackexchange.com? Update: Several answers miss the core question. I understand that there is benefit to splitting between multiple hosts (parallel downloads, a slimmer web server, etc.). But what is more elusive is why multiple domains. Why sstatic.net rather than static.stackexchange.com as the host for shared resources? So far, only one answer has addressed that.

  • When google search gives incorrect results - how can it be reported?

    - by vgv8
    If Google search query results are incorrect: how can it be reported? What is the procedure for correcting it? @Lèse majesté: incorrect results are results that do not contain any of the searched keywords, as in this question of mine. @John Conde: yes, I believe that is the right definition. @DisgruntedGoat: even when there are a lot of results for keycaptcha in "Past 24 hours", Google.com presents results only for reCAPTCHA. In any case, they differ from Google's own results for "keycaptcha" (in quotes) and from the results of other search engines. Does everybody think that searches for one keyword should be sneakily substituted with Google's own promoted brand products?

  • What to do when product range evolves and site name does not reflect this?

    - by nitbuntu
    Suppose, just as an example, that I have a website with the domain www.gifts-for-dogs.com... but after a few years I start selling stuff for cats and fish. I may not yet keep enough of a range of products for these other types of pets, so I can't justify changing the domain name and logo (to something like gifts-for-pets.com) just yet... but I envisage that I may eventually have to in the not-too-distant future. What would be a good strategy here, and what steps would I have to consider before making these changes?

  • Share links with <script src=""> and SEO

    - by gansbrest
    Hi, I would like to create a share link to my website using JavaScript: script src="[url-to-my-script]". Basically, the main idea is to render an HTML block with an image and a link to the website. By using the JavaScript approach I can control the look and feel of the link, and potentially I could change the link HTML in the future without touching partner websites. The only potential problem I see with this approach is SEO: I'm not sure Google will be able to index the link to my website, since it's generated by JavaScript.
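
    A minimal sketch of what such a share script might contain (the URLs and file names are illustrative, not from the question):

        // share-widget script, loaded on partner pages via <script src="...">
        document.write(
          '<a href="http://www.example.com/">' +
          '<img src="http://www.example.com/badge.png" alt="Example site"></a>'
        );

    Because the anchor exists only after the script runs, a crawler that does not execute JavaScript sees no link at this spot, which is exactly the SEO concern raised above.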

  • Best method to do A/B testing across two subdomains

    - by Lior
    I want to do an A/B test of an entire site for a new design and UX, with only slight changes in content (a big brand site that has good Google rankings for many generic keywords). My idea of an implementation is doing a 302 redirect to the new version (placing it on the www1 subdomain) and allowing only user agents of known browsers to pass. The test version will have "disallow all" in its robots.txt. Will Google treat this favorably, or do I have to use Google Website Optimizer (which will give me tracking headaches)?
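
    For reference, the "disallow all" robots.txt planned for the test subdomain would look like this, served at the root of www1:

        User-agent: *
        Disallow: /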

  • How to properly URL/domain forward

    - by NRGdallas
    No clue on a title for this; someone feel free to suggest an edit. I have a client who has a website. He owns around 200 domains and wants each domain to contain content from the main website. The header, footer, and navigation bars will remain the same for each domain, but the actual page content will vary (obviously raising duplicate-content issues; open to suggestions). He wants each individual page to be its own separate domain rather than a URL within the main domain (page1.com, page2.com, etc. - NOT site.com/page1.html - although the file is actually hosted at site.com/page1.html, and all links will point to site.com/whatever accordingly). What would be the best place to start reading/learning on how to do this, and what concerns/considerations should be kept in mind?

  • How to use different php.ini files for different VirtualHosts?

    - by gsingh2011
    I have my site and its staging subdomain running on the same CentOS machine running Apache. The subdomain is created using a VirtualHost, and I use it to find any bugs before I push to production. I want the php.ini file for the staging VirtualHost to be a development one, while the production site uses a production php.ini. How can I configure Apache to use different php.ini files? I don't want to use php_value/php_flag for everything; I'd rather just use the php.ini files I already have available. I've tried creating an .htaccess file that looks like this:

        SetEnv PHPRC /path/to/php.ini/directory

    This has no effect, as phpinfo() tells me it's still using /etc/php.ini. I've also tried setting PHPIniDir for both virtual hosts (www and staging), and Apache complains about seeing the directive twice.
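
    For context: under mod_php, PHPRC is only read when the Apache processes start, and PHPIniDir is a server-wide directive that may appear once, which is consistent with both attempts failing. A common workaround, although it is the per-directive style the asker hoped to avoid, is overriding just the settings that differ inside the staging VirtualHost (hostnames and paths below are hypothetical):

        <VirtualHost *:80>
            ServerName staging.example.com
            DocumentRoot /var/www/staging

            # mod_php directives scoped to this vhost only
            php_admin_flag  display_errors  on
            php_admin_value error_reporting 2147483647
        </VirtualHost>

    Using two genuinely different php.ini files generally means moving PHP out of mod_php, e.g. running it as FastCGI processes launched with different PHPRC environment values.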

  • Tracking referrals between profiles on the same domain in Google Analytics

    - by doctororange
    I have a website at mydomain.com that uses Analytics, and a blog at mydomain.com/blog/ which also uses Analytics. They are on different profiles. The main site uses something like:

        _gaq.push(['_setAccount', 'UA-XXXXXXXX-6']);

    while the blog uses:

        _gaq.push(['_setAccount', 'UA-XXXXXXXX-7']);
        _gaq.push(['_setCookiePath', '/blog/']);

    My issue is that this does not seem to track referrals from the blog through to the main site when, for instance, the logo which links to the main site is clicked. Ideally, I would like clicks on this logo to report that the source was mydomain.com/blog/, but because the two are on the same domain, they seem to register as direct traffic. Have I missed a step in my configuration, or will I have to resort to linking to something like mydomain.com?ref=blog? Thank you.
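
    Classic Analytics ignores a site's own domain as a referrer, so same-domain navigation typically lands in direct traffic. The ?ref=blog idea is usually implemented with Analytics' own campaign parameters instead, which both profiles will parse natively; a sketch of how the logo link could be tagged (parameter values are illustrative):

        <a href="http://mydomain.com/?utm_source=blog&utm_medium=internal&utm_campaign=blog-logo">
          <img src="/images/logo.png" alt="Home">
        </a>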

  • Cropping images & SEO

    - by user1181950
    So I have a page with a bunch of images of widely varying sizes, and the layout of the page is such that the images are all square tiles, so simply resizing them would produce distorted images. What I've done previously is resize and crop images when users upload them, display the new image as the thumbnail, and load the full image when the user clicks on it. However, I just realized this is an issue for SEO, as Google will crawl the thumbnails and put the thumbnails in Google Images instead of the full images. Is there any way to show a cropped/resized image but have Google Images show the full image? I could do something with CSS using an enclosing div and overflow:hidden, but I'd imagine the performance of that would be pretty bad. Any suggestions? Thanks! PS. I saw this (Make google index the actual image not the thumbnail), but in my case users are continuously uploading images, and the database of images is always changing and pretty big (thousands), so a sitemap would be pretty unwieldy.
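
    The CSS technique mentioned above crops the full image visually while the original file is what crawlers fetch; a minimal sketch (dimensions and paths are illustrative):

        <div style="width: 150px; height: 150px; overflow: hidden;">
          <!-- the full-size file is what Google Images indexes -->
          <img src="/uploads/full/photo123.jpg" alt="Photo description">
        </div>

    The trade-off is exactly the performance concern raised: every tile forces the browser to download the full-size file.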

  • How do I fix the Google Webmaster Tools warning: "URL not followed"?

    - by user3611500
    A few days after submitting my sitemap to Google, I received this warning: "When we tested a sample of URLs from your Sitemap, we found that some URLs redirect to other locations. We recommend that your Sitemap contain URLs that point to the final destination (the redirect target) instead of redirecting to another URL." The example URL Google gave me is http://iketqua.net/?_escaped_fragment_=CIDTKT/mien-trung/xo-so-kon-tum. I checked all the possible things I could think of, but still can't figure out what the warning is about! My sitemap: http://iketqua.net/sitemap.xml

  • Redirect a subdomain (weblog) to a new domain when you can't set up a .htaccess 301

    - by fafa
    I have a problem that I can't find the solution to on the web. I have a blog with PR 1 at the subdomain "aaaa.domain.com", where "domain.com" is a blog server. Now I have bought the domain "newdomain.com". I want to tell Google Webmaster Tools to redirect the old subdomain to this new domain and move the traffic to my new domain, but I can't access .htaccess to set up a 301 redirect. All I can do is put HTML code into the pages. How can I do this? When I use "Change of Address" in Google Webmaster Tools, it says: "Restricted to root level domains only". Sorry for the bad English.
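
    Without server access, the usual HTML-only stand-ins for a 301 are a cross-domain rel=canonical plus an immediate meta refresh, placed in the old pages' head. A sketch (neither hint carries the full weight of a real 301 redirect):

        <link rel="canonical" href="http://newdomain.com/">
        <meta http-equiv="refresh" content="0; url=http://newdomain.com/">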

  • Track url from Amazon S3 using Google Analytics

    - by morktron
    I couldn't find any decent pay-per-view video solutions for low-budget clients, so I'm considering using a membership extension with Joomla and hosting the video on Amazon S3. The only issue is that once someone has signed up to view or download the video, anyone with web development experience will be able to get the URL of the video and freely publish it on the web. How can this be prevented? It looks like it can be done using IAM User Temporary Credentials (AWS SDK for PHP), but the client would prefer not to pay someone to spend hours writing custom PHP code to get this to work. With Amazon S3 I could at least check the log files, I guess, to manually monitor the URL, but is there a way to track the URL with Google Analytics? Or is there a more elegant solution?
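
    A lighter-weight S3 feature for this than temporary IAM credentials is a presigned (expiring) URL, which is a few lines rather than hours of custom code. A sketch, assuming the older AWS SDK for PHP 1.x (bucket and key names are hypothetical):

        <?php
        require_once 'sdk.class.php';

        $s3 = new AmazonS3();

        // the signed URL stops working 15 minutes after it is issued,
        // so republishing it elsewhere is of limited use
        $url = $s3->get_object_url('my-video-bucket', 'videos/episode-1.mp4', '15 minutes');

        echo $url;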
