Search Results

Search found 9728 results on 390 pages for 'zee pro'.

Page 178/390

  • Still no structured data detected in Google Webmaster Tools [on hold]

    - by user6211
    Can you give me some suggestions about what's wrong with my structured data? Google still cannot read it. It looks like this:

        <div class="identity">
          <div itemscope itemtype="http://schema.org/LocalBusiness">
            <a itemprop="url" href="http://MYDOMAIN.co.uk/"><div itemprop="name"><strong>MY_COMPANY</strong></div></a>
            <div itemprop="address" itemscope itemtype="http://schema.org/PostalAddress">
              <span itemprop="streetAddress">MY_ADDRESS</span>,
              <span itemprop="addressLocality">London</span>,
              <span itemprop="postalCode">SE5 MY_XYZ</span>,
              <span itemprop="addressCountry">UK</span>
            </div>
          </div>
        </div>

    Read the article

  • best/simplest way to inform search engines of a sitemap's location

    - by Don
    AFAIK, there are two ways to make search engines aware of a sitemap's location:

    1. Include an absolute link to it in robots.txt.
    2. Submit it to them directly. The relevant URLs are:

        http://www.google.com/webmasters/tools/ping?sitemap=SITEMAP_URL
        http://www.bing.com/webmaster/ping.aspx?sitemap=SITEMAP_URL

    where SITEMAP_URL is the absolute URL of the sitemap.

    Currently, I do both. Regarding (2), I have a job that runs automatically every day which submits the sitemap to Bing and Google. I don't think there's any reason to do both (1) and (2), but I'm paranoid, so I do. I imagine you can avoid both (1) and (2) if you just make your sitemap accessible at a conventional URL (the way robots.txt is). What's the simplest and most reliable way to ensure that search engines can find your sitemap?
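
    For reference, option (1) is a single line in robots.txt; a minimal sketch with a placeholder domain:

        Sitemap: http://www.example.com/sitemap.xml

    The Sitemap directive is part of the sitemaps.org protocol, so it can sit anywhere in the file, independent of any User-agent block.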

    Read the article

  • How to choose, set and use keywords while structuring a website?

    - by mechdeveloper
    I have been working on my personal website for some time. I think I have been doing a good technical job, but unfortunately I did a terrible job structuring the website, because I didn't care about the keywords I was going to use. Although it is my personal website, the main objective is the blog, so I'd like the keywords to relate to the blog's content. At present, Google Webmaster Tools is displaying a lot of keywords that have nothing to do with the content of the website, and some SEO reporting sites, such as WooRank, say that the keyword optimization of the website is awful. So I have three questions:

    1. How do I choose, set and use keywords while structuring a website?
    2. OPTIONAL: What are the methods and sources that search engines use to collect the keywords of a website?
    3. Some high-profile websites aren't optimized for this either; should I be concerned about it anyway, or is there anything more important I should be concerned about?

    (If you want to see the website, please check my profile.)

    Read the article

  • Storing User-uploaded Images

    - by Nyxynyx
    What is the usual practice for handling user-uploaded photos and storing them in the database and on the server?

    For a user profile image:

    1. After receiving the image file from the user, rename the file to <image_id>_<username>
    2. Move the image to /images/userprofile
    3. Add the image filename to a users table containing profile details like first_name, last_name, age, gender, birthday

    For an image attached to a review done by a user:

    1. After receiving the image file from the user, rename the file to <image_id>_<review_id>
    2. Move the image to /images/reviews
    3. Add the image filename to a reviews table containing details like review_id, review_content, user_id, score

    Question 1: How should I go about storing the image filenames if the user can upload multiple photos for a particular review? Serialize?

    Question 2: Or have another table, review_images, with columns review_id, image_id, image_filename, just for tracking images? Will doing a JOIN when retrieving the image_filename from this table slow down performance noticeably?

    Question 3: Should all the images be stored in a single folder? Will there be a problem when we have 100K photos in the same folder? Is there a more efficient way to go about doing this?
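
    For what it's worth, the separate table in Question 2 is the conventional normalized design. A minimal MySQL sketch (table and column names taken from the question; everything else is assumed):

        -- One row per uploaded image, many rows per review
        CREATE TABLE review_images (
            image_id       INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            review_id      INT UNSIGNED NOT NULL,
            image_filename VARCHAR(255) NOT NULL,
            INDEX idx_review (review_id)  -- keeps the JOIN on review_id cheap
        );

    With an index on review_id, the JOIN cost is negligible compared to the headaches of serializing filenames into a single column.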

    Read the article

  • Leveraging a hosted web font service from a local development server?

    - by Tom Auger
    There are a number of popular web font services on the market today that "host" the fonts and serve them to your web page via JavaScript or CSS pointing to remote locations, for example http://webfonts.fonts.com or http://typekit.com. However, there seems to be an issue when you're developing on a local testing server: the remote font services don't validate the font and return 403 access denied errors and the like. What workarounds are there for using remote services, such as a hosted font service, on a local development server?
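
    One common workaround, assuming the font service lets you register additional domains for your kit or project (several do), is to serve the local site under a hostname the service already trusts, via an /etc/hosts entry; dev.example.com here is a placeholder:

        # /etc/hosts - point a hostname registered with the font service at the local dev server
        127.0.0.1   dev.example.com

    Browse the local site at http://dev.example.com/ and the service's domain check passes.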

    Read the article

  • Broken Links in different browsers

    - by kdorival
    Hi, I'm having problems with our website, http://www.accessiblehomehealthcare.com, which runs WordPress 2.7. All of a sudden our RSS links broke on the right side, which has happened before, and I fixed it within 5 minutes. Now, when I fix it, it doesn't look right in different versions of IE and Firefox. I have IE 8 and Firefox 3.6.15, and it looks good for the most part, but there are a few parts where the links are broken. In one browser the links look OK, but go to another page and the links or logos are broken. Certain parts of the website should be static (identical) across pages of the site, yet a link that is broken on one page is perfect on another. Is there a secret setting in WordPress that keeps a site compatible with all browser versions, or is this a bigger issue? Any help or suggestions would be appreciated.

    Read the article

  • Trouble with .htaccess redirection

    - by mike23
    I use this redirect rule to redirect users from www.domain.com/admin to www.domain.com/wp-admin on a WordPress site:

        RedirectMatch 301 \@admin http://www.domain.com/wp-admin

    The problem is that instead of redirecting to wp-admin/, it redirects to an article called "Administrators are awesome people" (slug: administrators-are-awesome-people). I can guess what is going on: WP sees that there is an article slug starting with "admin", and redirects to it, overruling my own rule. Is there a way to be more specific, like saying "redirect URLs that end with exactly admin"?
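
    A minimal sketch of a more specific rule, assuming standard RedirectMatch regex syntax; the ^ and $ anchors restrict the match to the exact /admin path (with an optional trailing slash), so slugs that merely start with "admin" no longer match:

        RedirectMatch 301 ^/admin/?$ http://www.domain.com/wp-admin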

    Read the article

  • Will this sitemap get me de-indexed from Google?

    - by heavy rocker dude
    My site's URL is http://quantlabs.net/private/sitemap.xml. Will this sitemap get me de-indexed from Google? My new sitemap just got spidered by Google for some reason. Does it look like spam, even though it is not meant to be? I am trying to figure out Google's threshold before they deem a sitemap spammy. This sitemap contains automated postings that differ only in the stock symbol provided, and there are quite a few of them posted within a small amount of time.

    Read the article

  • Newsletter link to share a webpage on Facebook - Facebook is not accepting it

    - by donaldthe
    My question is: what link should I use to enable a one-click Facebook share of a webpage from an external application such as an email? Thanks. Here are the details:

    1. A user submits content to my website.
    2. The content is first reviewed by the webmaster.
    3. The webmaster sends out a congratulations email: "Your article has been published, here is the link: http://www.mywebsite.com/new-page.html"

    I would like to include in the congratulations email: "Click on this link to share your article with your friends on Facebook". The link I am using is the following:

        http://www.facebook.com/share.php?u=http://www.mywebsite.com/new-page.html

    If I paste it into the address bar while I am logged in, it works fine, and Facebook parses the new page for metadata and displays the page title and description. However, when clicking on the link from my newsletter, I am redirected to the login page, even though I am logged in, and after logging in, I am sent to my profile page and the share is not posted.
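
    One thing worth checking: when the target URL rides in a query parameter, its value should be percent-encoded. A minimal PHP sketch (building the link server-side; the variable names are hypothetical):

        <?php
        // Build a share link with the article URL percent-encoded
        $article_url = 'http://www.mywebsite.com/new-page.html';
        $share_link  = 'http://www.facebook.com/share.php?u=' . urlencode($article_url);
        echo $share_link;
        // => http://www.facebook.com/share.php?u=http%3A%2F%2Fwww.mywebsite.com%2Fnew-page.html

    An unencoded u value containing its own ? or & characters can confuse both Facebook's parser and some email clients' link handling.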

    Read the article

  • Keeping files private on the internet (.htaccess password or software/php/wordpress password)

    - by jiewmeng
    I was asked a while ago to set up a server such that only authenticated users can access files. It was like a test server for clients to view WIP sites. More recently, I want to do something similar for some of my files. Though they are not very confidential, I wish to be the only one viewing them. I thought of doing the same:

    1. Create a robots.txt:

        User-agent: *
        Disallow: /

    2. Set up some password protection. .htpasswd seems like a very ugly way to do it; it will prompt me even when I log into FTP.

    I wonder if a software method like password-protected posts in WordPress would do the trick of locking out the public and hiding content from search engines? Or would some self-made PHP script do the trick?
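
    For the self-made PHP route, a minimal sketch of a session-based gate (the password is a placeholder; include this at the top of every page you want to protect):

        <?php
        // gate.php - include at the top of every protected page
        session_start();

        $password = 'CHANGE_ME'; // placeholder secret

        if (isset($_POST['pw']) && $_POST['pw'] === $password) {
            $_SESSION['authed'] = true;
        }

        if (empty($_SESSION['authed'])) {
            // Not authenticated: show a login form and stop rendering
            echo '<form method="post"><input type="password" name="pw"> <input type="submit" value="Log in"></form>';
            exit;
        }

    Unlike robots.txt, which only asks crawlers to stay away, this actually blocks anonymous visitors; note, though, that it protects only pages routed through PHP, not directly linked static files.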

    Read the article

  • Should I use "noindex, follow" or rel="canonical"?

    - by webmasters
    I have a site that lists offers and promotions from other websites. Since the offers expire rather quickly, I don't save them into my database. I see no point in having a page from 2010 about a 30% discount on a certain brand of shoes which isn't available anymore. A visitor enters my website and clicks on the "shoes" category:

        http://www.mysite.com/shoes/

    Here he sees 20 available promotions from different online stores. He clicks on a promotion and gets to a page like this:

        http://www.mysite.com/shoes/promotions/prada

    Questions: I use the template promotions.php and list all the promotions (/promotions/prada/, /promotions/otherbrand/, ...). What I do is use "noindex, follow" for the links. Is that a good idea? Or should I use rel="canonical" for the promotion page? How do you advise me to handle this from the SEO point of view?
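
    For reference, the two mechanisms being weighed look like this in the <head> of a promotion page (URLs taken from the question's examples):

        <!-- Option A: keep the page out of the index but let crawlers follow its links -->
        <meta name="robots" content="noindex, follow">

        <!-- Option B: point duplicate/variant URLs at one preferred URL -->
        <link rel="canonical" href="http://www.mysite.com/shoes/promotions/prada">

    noindex removes the page from search results entirely, while rel="canonical" consolidates ranking signals onto the canonical URL; they solve different problems, so the choice hinges on whether expiring promotion pages should have any indexed presence at all.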

    Read the article

  • Are there any adverse side effects to loading html5shiv in every browser?

    - by Jeff
    On the html5shiv Google Code page the example usage includes an IE conditional:

        <!--[if lt IE 9]>
        <script src="dist/html5shiv.js"></script>
        <![endif]-->

    However, on the html5shiv GitHub page, the description explains: "This script is the defacto way to enable use of HTML5 sectioning elements in legacy Internet Explorer, as well as default HTML5 styling in Internet Explorer 6 - 9, Safari 4.x (and iPhone 3.x), and Firefox 3.x." An obvious contradiction. So, to satisfy my curiosity: for anyone who has studied the code, are there any adverse side effects to loading html5shiv in every browser (without the IE conditional)? EDIT: My goal, obviously, is to use the shiv without the IE conditional.

    Read the article

  • setting up freedns with an existing domain

    - by romeovs
    I've been running a web server off of a PC at a static IP successfully for the past 5 months. Recently, however, I've moved into another apartment, and my ISP only provides a dynamic IP (my IP changes from time to time). I'm not an internet genius, but I was thinking of fixing this by using a dynamic DNS provider, so I got on the web and found FreeDNS. I'm a bit confused about how to set everything up, though. I've managed to successfully install the IP updater daemon on my web server. Then, in my registrar's control panel, I set the NS records to point at ns1 through ns4.afraid.org (removing the old NS records). I'm not certain what I should do with the A records, though (for now they are still pointing to the old static IP address). I have A records for www, blog, irc, etc., but I cannot point them at my new IP address, because it isn't static. Could someone explain this in the clearest possible sense (perhaps elaborating on what happens at each step of the DNS process)? I never really knew what the A records are for anyway. (Note that I haven't really found any documentation at the FreeDNS website, or on Google.)
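
    For background, an A record is just a hostname-to-IPv4 mapping; in zone-file notation it looks like this (hypothetical values, using a documentation IP):

        www.example.com.    3600  IN  A      203.0.113.42
        blog.example.com.   3600  IN  CNAME  www.example.com.

    With dynamic DNS, the records live at FreeDNS rather than at the registrar: the registrar's NS records delegate the domain to FreeDNS's name servers, and the updater daemon rewrites the A record there whenever the ISP hands out a new IP. The old A records at the registrar stop mattering once the delegation points at ns1-ns4.afraid.org.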

    Read the article

  • SEO with duplicate content

    - by user16831
    I have a nature photography site with multiple types of photo galleries. Each photo and associated caption on my site appears in several galleries. For instance, a photo of a goldfinch that was taken on a trip to New Mexico in 2008 will appear in the "goldfinch.php" gallery, in the "finches.php" gallery, and in the "New_Mexico_2008.php" gallery. This duplication is useful for my site visitors - User A may want to see goldfinch photos, whereas User B wants to see photos from New Mexico - but I am concerned about the SEO implications. The typical suggestions for dealing with duplicate content, such as 301 redirects and canonical tags, probably won't work in this case, because the page content is substantially different (ranging from ~1% to ~90% duplication, depending on the specific example chosen). The obvious solution to me would be to edit robots.txt to only allow search engines to crawl one type of gallery - for instance, if they crawled only the galleries organized by species (e.g. goldfinch.php), all the photos on my site would be found exactly once. However, the Google content guidelines recommend against blocking crawler access to duplicate information. Should I go ahead and use robots.txt anyway? Or is there a better solution?
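
    For what it's worth, the robots.txt idea would look roughly like this, with one Disallow line per non-species gallery (filenames here are the question's examples; a real file would need the full list):

        User-agent: *
        Disallow: /finches.php
        Disallow: /New_Mexico_2008.php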

    Read the article

  • Tips for managing internal and external links using WordPress [closed]

    - by keruilin
    So I'm looking for ways to optimize my site for user and search engine purposes. I've read several articles and looked at several different plugins. To say the least, I'm thoroughly confused as to what the best practices are for managing internal and external links. Here is a list of some of my questions:

    1. Which internal links should be set to "nofollow"?
    2. Which external links should be set to "nofollow"?
    3. To what degree does actively managing links contribute to your PR?
    4. Should you use "nofollow" blindly on all links in comments?
    5. If a link to an external site is broken (404 or whatever), should you "nofollow" that link? What about "noindex"?

    As you can see, lots of questions. I'm hoping that you experienced webmasters can give a newb some best-practice advice.
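
    For reference, a per-link nofollow and a page-level noindex look like this (example.com is a placeholder):

        <a href="http://external.example.com/" rel="nofollow">external link</a>
        <meta name="robots" content="noindex">

    The distinction matters for the last question: nofollow can be applied to an individual broken link, but noindex is a page-level directive that would only apply to one of your own pages.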

    Read the article

  • WordPress website issue [duplicate]

    - by David
    This question already has an answer here: "What are the best ways to increase a site's position in Google?" (18 answers)

    I have my website in WordPress. Now the problem is: if I search Google for any keywords related to the website's pages, it doesn't show any of them in the web results. But if I search Google's blog results, my pages show up there. I want to know what the problem is with my pages. Why do they appear in Google blog search instead of Google web search?

    Read the article

  • Glue Records creation

    - by FFrewin
    I need some information on the following issue, as I would like to have it clear in my mind. I have a VPS server. All the sites hosted on this VPS use .gr nameservers, like ns1.greekdomain.gr and ns2.greekdomain.gr. The .gr domain name is a domain I own with a Greek registrar. Now, I want to move two websites with .co.uk domain names to my VPS. The .co.uk domain names are registered with a UK-based registrar. When I went into the domain management panel, I changed the nameservers of my domains to my ns*.greekdomain.gr nameservers. However, the panel returned an error about invalid nameservers. After digging, I found that my nameservers are not valid because they do not exist as records in the .co.uk registry. And here my big trouble starts. The .co.uk registrar tells me that I have to ask my hosting provider / .gr registrar to create a new record in the .uk registry for my nameservers. The .gr registrar tells me that my UK registrar needs to create a new record for my nameservers. At Nominet (the .co.uk registry), one employee told me that I need to ask my UK registrar; another employee (who seemed not to understand what I was asking) told me that they cannot change my nameservers for me, and to contact anyone else (old hosting provider, UK registrar, .gr registrar) for help. I can't find help from anybody. I have been trying for the last week to transfer my websites to my VPS and I can't. So, the question is: who is responsible, and who is able, to create glue records for my nameservers?
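
    A sketch of how to check what currently resolves, using dig (thedomain.co.uk is a placeholder for one of the .co.uk names):

        # Does the .gr nameserver's hostname resolve publicly?
        dig +short A ns1.greekdomain.gr

        # What nameservers does the .co.uk zone currently delegate to?
        dig +short NS thedomain.co.uk

    Strictly speaking, glue records are only required when a nameserver's hostname lives inside the zone it serves (e.g. ns1.thedomain.co.uk serving thedomain.co.uk). For out-of-zone hosts like ns1.greekdomain.gr, the UK side generally just needs the hostname to resolve publicly, which the first command verifies; registering the hosts as nameserver objects in a registry is normally done through whichever registrar manages the domain the nameservers belong to, i.e. the .gr registrar in this case.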

    Read the article

  • What is a good network for full-page rich ads?

    - by Vishnu
    I'm currently developing a website where users will be able to upload content. I would like to be able to show a full-page ad whenever someone tries to view the content. The ad should take up most of the screen, and I should be able to have a "continue to the content" link at the top. Preferably, I want something like what is currently on Forbes (if you haven't seen it, look here: http://www.forbes.com/fdc/welcome.shtml but with an ad in the black area). Of course, the most revenue is the best. Thanks.

    Read the article

  • SEO effects of intermixing a WP blog, a custom PHP site and an FB app game

    - by melbournetechlover
    We're a Melbourne tech company in the process of building a custom site in PHP. We plan to launch a "pre-launch" page which is also custom coded (CSS3 on the Twitter Bootstrap framework + HTML5 front end and PHP back end). On that site will be a link to a blog. The idea behind this is to build up ranking for a variety of relevant keywords before the full site goes live (the majority of the site is a members-only community anyway, so the blog is really the main way we'll be able to execute on-site SEO). Ideally, we would like to install WordPress in a subdirectory on our servers and just customise the header to look the same as the landing page of the website. But some questions and concerns:

    1. Is there any detrimental effect on SEO efforts in having two separate systems (one custom PHP, the other an installation of WordPress) to manage the blog vs. the rest of the site?
    2. Are there any benefits or detriments to installing on a subdomain such as blog.sitename.com vs. sitename.com/blog? My preference would be sitename.com/blog as it feels neater, but I'm open to suggestions based on knowledge of Google's preferences.
    3. Separately, we are building a Facebook app which is under another site name. Because we are launching this app first, would it be better, from an SEO perspective, to run it from a subdomain on the main site, e.g. gamename.mainsitename.com, instead of on app.gamename.com? Currently we have it on app.gamename.com, but if there are SEO benefits to moving it to the other domain and server then we'll do it.

    Basically, we don't want to have our SEO efforts divided: will Google's algorithms prefer two sites heavily referring traffic to each other, or is it better to focus our efforts on one? I guess that's the crux of the issue. The other question is: does Google care about traffic accessing a page built for the Facebook app iframe, and does that count toward rankings? Sorry, I hope these questions aren't too complex, but we're in the tech world every day and still can't seem to find a good answer to these ones... hence I'm taking to the forums! Free beer for whoever can give me a solid answer!

    Read the article

  • Font displays differently in Firefox vs. Chrome

    - by Goro
    It seems that my menu bar is displayed with a different font stretch in Firefox than it is in Chrome (see the attached screenshot). Here is the CSS applied to this element:

        font-variant: small-caps;
        font-size: 13px;
        letter-spacing: 0px;
        font-family: Arial;
        font-stretch: normal;
        text-decoration: none;

    As far as I can tell, everything regarding that font is exactly the same, yet the two browsers still render it differently. Any ideas? Thanks,

    Read the article

  • robots.txt, how effective is it and how long does it take?

    - by Stefan
    We recently updated the site to a single-page site using jQuery to slide between "pages", so we now have only index.php. When you search for the company on engines such as Google, you get the site plus a listing of its sub-pages, which now lead to outdated pages. Our hosting plan doesn't allow us to edit the .htaccess, and the old pages are .html docs, so I cannot use PHP redirects either. So if I put in place a robots.txt telling the engines not to crawl beyond index.php, how effective will this be in preventing/removing the crawled sub-pages? And, as a rough guess, how long would it take the search engines to update?
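
    One workaround when neither .htaccess nor PHP is available: the old pages are .html files you can still edit, so each can carry a meta refresh (and a noindex) in its <head>. A sketch with a placeholder URL:

        <!-- old-page.html: send visitors and crawlers to the new single-page site -->
        <meta http-equiv="refresh" content="0; url=http://www.example.com/index.php">
        <meta name="robots" content="noindex">

    Google treats an immediate meta refresh much like a redirect, which tends to clear old URLs from results faster than a robots.txt block; robots.txt prevents crawling but does not by itself remove URLs that are already indexed.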

    Read the article

  • Commenting System For A Website

    - by lvil
    I hope this is the right place for such a question. I am developing a website that has no user system, and I am looking for a commenting system for it. Requirements:

    - Ajax commenting
    - Flagging of comments
    - Administration (deleting comments)
    - PHP using my own DB, or an external service
    - No registration, no Facebook comments
    - Option for a captcha
    - Hebrew or otherwise customizable interface

    Can you please suggest such a system?

    Read the article

  • Accented characters representation in the URL

    - by Dan
    We support various languages on our website, including Spanish, French and Swedish. For now, the links in the site are NOT encoded before being sent to the browser: we send the real accented chars (if such exist, i.e. href="www.example.com/héllo.html") and not their hex representation. This works and looks good in all browsers, including Chrome, FF and IE. However, we care a great deal about SEO. We got a tip that encoding the links before sending them to the browser (so instead of linking to http://www.example.com/héllo, we would link to http://www.example.com/h%E9llo) will improve the way search engines 'understand' the links and the keywords in the URL. This involves some work on our side, so we wanted to know if there's any truth to that tip, but we couldn't find anything addressing the issue.
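
    For what it's worth, a sketch of the standard encoding in PHP. Note that rawurlencode works on the page's byte encoding, so a UTF-8 site produces a different escape (%C3%A9) than the Latin-1 %E9 shown in the question:

        <?php
        // On a UTF-8 site, é is the two-byte sequence C3 A9
        echo rawurlencode('héllo'); // prints h%C3%A9llo

    Browsers percent-encode such URLs on the wire anyway, so emitting the encoded form yourself mainly guarantees that browsers and crawlers all see one canonical spelling of the URL.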

    Read the article

  • Lazyloading images and SEO

    - by surpr
    We lazyload images, with a noscript fallback. Should I expect any damage in the SERPs? The site is completely thumbnail-based. Also, should I put a smaller image size in the noscript fallback to increase crawlability? We have nearly 1 million thumbs, so it's a decision I'm hesitant about. The reason I'm thinking about this in the first place is that we're upping thumbnail size by about 50%, which will add 10% to the page size.
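
    For reference, the pattern in question usually looks something like this (attribute names vary by lazyload library; data-src is a common convention, and the paths are placeholders):

        <img class="lazy" data-src="/thumbs/photo123.jpg" alt="thumbnail">
        <noscript>
          <img src="/thumbs/photo123.jpg" alt="thumbnail">
        </noscript>

    Crawlers that don't execute JavaScript read the noscript copy, so pointing it at a smaller file means search engines index the smaller image; that is the trade-off being weighed.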

    Read the article

  • Apache config that uses two document roots based on whether the requested resource exists in the first [closed]

    - by mattalexx
    Background

    I have a client site that consists of a CakePHP installation and a Magento installation:

        /web/example.com/
        /web/example.com/app/                <== CakePHP
        /web/example.com/app/webroot/        <== DocumentRoot
        /web/example.com/app/webroot/store/  <== Magento
        /web/example.com/config/             <== Site-wide config
        /web/example.com/vendors/            <== Site-wide libraries

    The server runs Apache 2.2.3.

    The problem

    The whole company has FTP access and got used to clogging up the /web/example.com/, /web/example.com/app/webroot/, and /web/example.com/app/webroot/store/ directories with their own files. Sometimes these files need HTTP access and sometimes they don't. In any case, this mess makes my job harder when it comes to maintaining the site. Code merges, tarring the live code, etc., are very complicated and usually require a bunch of filters.

    Abandoned solution

    At first, I thought I would set up a new subdomain on the same server, move all of their files there, and change their FTP chroot. But that wouldn't work, for these reasons. Firstly, I have no idea (and neither do they remember) what marketing materials they've sent out that contain URLs to certain resources they've uploaded to the server, using the main domain, and also using arbitrary subdomains that resolve to the main virtual host because it has ServerAlias *.example.com. So suddenly having them use only static.example.com isn't feasible. Secondly, the PHP scripts in their projects are potentially very non-portable. I want their files to stay in as similar an environment as the one they were built in. Also, I do not want to debug their code to make it portable.

    Half-baked solution

    After some thought, I decided to find a way to section off the actual website files into another directory that they would not touch. The company's uploaded files would stay where they were. This would ensure that I didn't break any of their projects that need HTTP access. It would look something like this:

        /web/example.com/                         <== A bunch of their files are in here
        /web/example.com/app/webroot/             <== 1st DocumentRoot; a bunch of their files are in here
        /web/example.com/app/webroot/store/       <== Some more are in here
        /web/example.com/site/                    <== New dir; contains only site files
        /web/example.com/site/app/                <== CakePHP
        /web/example.com/site/app/webroot/        <== 2nd DocumentRoot
        /web/example.com/site/app/webroot/store/  <== Magento
        /web/example.com/site/config/             <== Site-wide config
        /web/example.com/site/vendors/            <== Site-wide libraries

    After making this change, I would not need to pay attention to anything except the stuff within /web/example.com/site/, and my job would be a lot easier. I would be the only one changing stuff in there. So here's where the Apache magic would happen: I need an HTTP request to http://www.example.com/ to first use /web/example.com/app/webroot/ as the document root. If nothing is found there (no miscellaneous uploaded company projects match), try finding something within /web/example.com/site/app/webroot/. Another thing to keep in mind: the site might have problems if $_SERVER['DOCUMENT_ROOT'] reads /web/example.com/app/webroot/ while the actual files are within /web/example.com/site/app/webroot/. It would be better if the DOCUMENT_ROOT environment variable could be /web/example.com/site/app/webroot/ for anything served from within /web/example.com/site/app/webroot/.

    Conclusion

    Is my half-baked solution possible with Apache 2.2.3? Is there a better way to solve this problem?
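
    A hedged sketch of the usual mod_rewrite fall-through for a primary-then-secondary document root, assuming mod_rewrite is available (paths as above). Caveat: this leaves DOCUMENT_ROOT pointing at the primary root, which is exactly the concern raised, so the apps may need to read an override variable instead:

        <VirtualHost *:80>
            ServerName www.example.com
            ServerAlias *.example.com
            DocumentRoot /web/example.com/app/webroot

            RewriteEngine On
            # If the request resolves to neither a file nor a directory
            # under the primary root...
            RewriteCond /web/example.com/app/webroot%{REQUEST_URI} !-f
            RewriteCond /web/example.com/app/webroot%{REQUEST_URI} !-d
            # ...serve it from the secondary root instead; filesystem-path
            # substitutions are permitted in server/vhost context, and the
            # E= flag records the effective root for the PHP apps to use
            RewriteRule ^/(.*)$ /web/example.com/site/app/webroot/$1 [L,E=SITE_DOCROOT:/web/example.com/site/app/webroot]
        </VirtualHost>

    Apache 2.2 has no per-request way to change DOCUMENT_ROOT itself, so the CakePHP/Magento bootstrap would need to prefer the SITE_DOCROOT variable (which may appear as REDIRECT_SITE_DOCROOT in $_SERVER after the internal rewrite) when it is set.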

    Read the article
