Search Results

Search found 5380 results on 216 pages for 'webmasters'.


  • jQuery Carousel - Need a Script Hack to Customize

    - by Leah
    I hope someone can help me out here. I am coding a site for a client using carouFredSel, because it allows for variable widths within the slideshow, as seen here: http://2938.sandbox.i3dthemes.net/index-old.html. My problems are as follows: with the built-in auto-center script, scrolling changes the white space between images during the transition to fit the width of the wrapper. I need a hack to keep the white space the same during the transition, like this: http://2938.sandbox.i3dthemes.net/index.html. Also, I can't figure out how to put this snippet into my code and make it work:
        scroll: {
            onAfter: function() {
                if ($(this).triggerHandler("currentPosition") == 0) {
                    $(this).trigger("pause");
                }
            }
        }
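    A minimal sketch of where that snippet goes in the carouFredSel call; the #carousel selector and the auto option are assumptions, so adjust them to the actual markup:
        $('#carousel').carouFredSel({
            auto: true,
            scroll: {
                onAfter: function() {
                    // pause once the carousel returns to its first item
                    if ($(this).triggerHandler("currentPosition") == 0) {
                        $(this).trigger("pause");
                    }
                }
            }
        });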

    Read the article

  • Best way to setup multiple sites' emails in my Gmail

    - by John
    I have a dozen sites and I want all of their emails to come to my one Gmail ID, and I want to reply centrally from Gmail only. I've also added all of those addresses to the "Send mail as:" list in Gmail. I could add email forwarders in my cPanel, but in that case I won't be able to send email from addresses whose inboxes haven't been created (for example [email protected]). If I create the email accounts, then I'd receive each message twice: in the hosted inbox as well as forwarded (to my Gmail ID). Otherwise, I could set up Gmail for my domains, but for a dozen domains I'm not sure that'd be fine. I see at http://www.google.com/enterprise/apps/business/pricing.html that it is free for up to 10 users. But then, to send email from the web hosting, the PHP code will need SMTP login details, and leaving my important Gmail account details on my web hosting is very risky given that my sites have been compromised twice. What is the best way to centralize all my emails so that I can read/reply/search from a single place?

    Read the article

  • Fixing HTTP 400-499 error codes with 301 redirects in the .htaccess file

    - by user2131844
    Google previously indexed my website's pages (sitemap.xml) in the format below:
        www.domain.com/2013/04/18/hottest-gadgets-of-2013-to-include-in-your-list
        www.domain.com/2013/02/09/ringdroid
    I have resubmitted the sitemap, but there are still 404 errors in the Google/Bing engines. Could you please help me write a 301 redirect rule in the .htaccess file so that when someone clicks the URL www.domain.com/2013/02/09/ringdroid they are redirected to www.domain.com/ringdroid? How can we write a rule in the .htaccess file to remove the date part 2013/02/09/?
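    A sketch of a rule that does this, assuming the rules live in the document root's .htaccess and the rest of the slug is unchanged:
        RewriteEngine On
        # permanently redirect /YYYY/MM/DD/slug to /slug
        RewriteRule ^[0-9]{4}/[0-9]{2}/[0-9]{2}/(.+)$ /$1 [R=301,L]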

    Read the article

  • https:// search results appearing on Google for purely http:// site

    - by hydrurga
    I started weeding through my site's search results from Google today, using a site: search, to determine if there are any links that cause 404s and thus need redirecting. To my amazement I noticed numerous https:// results relating to various pages. My site doesn't have an SSL certificate, doesn't serve such pages, doesn't internally link to https:// pages, doesn't include any such files in its sitemap.xml and, for all of these, never has. I decided to do a Google search for https://<my site> and found one site that incorrectly refers to the root of my site with an https:// prefix - I will try to contact them to get them to correct this. I'm not sure, however, how Googlebot managed to index the non-root files as https://. I can't find any external links to them, and surely, without certification, Googlebot should have stalled at the first request? I've just added the following lines to the site's .htaccess (although the surfer still has to navigate through the browser's "This site is a security risk. Abandon hope all ye who enter here!" message(s) first to get there):
        RewriteEngine On
        RewriteCond %{HTTPS} on
        RewriteRule ^(.*)$ http://www.<my site>.org/$1 [R=301,L]
    replacing <my site> with my domain name. My big question is this, though: I would like to use the Google Webmaster Tools Remove URLs feature to remove the https:// pages from the index. Can I be guaranteed that this will only remove the https:// versions of each relevant page and not the valid http:// versions? My thanks to anyone who can help me out with this particular question and the issue in general.

    Read the article

  • Is it good or bad to have dynamic content in page titles and/or descriptions

    - by Gunjan
    In a local listing website, I append the number of search results found to the description meta tag of the page (not in the title currently), as I think this is valuable for users, e.g. "Find address, phone numbers, blah blah blah for 21 outlets in locality. some more stuff after this..." As more places are added to the database, the description for the same page will change frequently. Is this good or bad for SEO? How about doing the same for title tags?
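    As a concrete sketch, the tag in question would look like this (wording and count are illustrative):
        <meta name="description" content="Find addresses, phone numbers and more for 21 outlets in Locality.">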

    Read the article

  • "X-Robots-Tag: noindex" on an HTTP 301 response

    - by Peter O.
    I understand that a resource with X-Robots-Tag: noindex forces some search engines, including Google, not to index the resource further. I also understand that an HTTP 301 response causes search engines to use the redirected URL instead of the original URL to refer to the resource. But what happens if both "X-Robots-Tag: noindex" and status code 301 occur on the same response? It's likely that the original URL will no longer be indexed, but will that cause the redirected URL to no longer be indexed too? This possibility is not mentioned in the X-Robots-Tag specification.
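    For concreteness, the response being asked about would look something like this (the Location URL is hypothetical):
        HTTP/1.1 301 Moved Permanently
        Location: http://example.com/new-path
        X-Robots-Tag: noindex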

    Read the article

  • Cheaper alternatives to 99Designs.com (outsource CSS design)

    - by Chris Smith
    I'm designing my own website as a side project and I want the site to look professional. (Read, not designed by a programmer.) I don't mind spending a little money to have a professional do it, but design sites like 99designs.com cost way to much. (~$500+) Is there a cheaper (~$100 - $200) alternative for getting a designer to improve an existing site? (Things like updating the CCS or suggesting better ways for laying out the navigation.) Or is my best bet trying to pick up a freelancer on Craigslist?

    Read the article

  • I'm getting a 403 Forbidden error on my website

    - by user1230090
    I was accessing the directories through Cyberduck and also trying to upload files. But now it has started showing this Forbidden error. I was getting the homepage at first; now I don't get even that. Can anyone please tell me how I can get my website back up? The error log shows:
        [Fri Mar 02 14:36:21 2012] [error] File does not exist: /var/www/vhosts/example.com/httpdocs/bin
        [Fri Mar 02 14:37:24 2012] [error] File does not exist: /var/www/vhosts/example.com/httpdocs/httpsdocs
        [Fri Mar 02 14:39:01 2012] [error] (13)Permission denied: file permissions deny server access: /var/www/vhosts/example.com/httpdocs/index.html
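    The last log line points to a permissions problem rather than a missing file. A typical fix, sketched below, is to hand the docroot back to the web server's user and restore standard permissions; the user/group here is an assumption that depends on the host (Plesk-style vhosts often use a per-site user rather than www-data):
        chown -R www-data:www-data /var/www/vhosts/example.com/httpdocs
        find /var/www/vhosts/example.com/httpdocs -type d -exec chmod 755 {} \;
        find /var/www/vhosts/example.com/httpdocs -type f -exec chmod 644 {} \;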

    Read the article

  • Migrating Ruby on Rails Website to New Server (Linux)

    - by GarytheWorm
    I have an existing website that is a Ruby on Rails project, and another server I need to transfer the existing website to. The server I wish to transfer to was originally hosting the website, so the necessary gems/configuration are installed. I have tarred the current/releases/shared dirs from the old server and transferred them over to the new server, then unpacked the tar in the apps directory at the new location, which is a different URL path. My problem now, as you can see below, is that the current symlink is pointing to the old URL path (I ran ls -la to see ownership). How can I change this current path to point to my new web address?
        current  releases  shared  sitepack.tar
        root@server1:/var/www/clients/client1/NEWSITE.com/web/apps# ls -la
        current -> /var/www/OLDSITE.com/web/apps/releases/20120130171636
        root@server1:/var/www/clients/client1/NEWSITE.com/web/apps#
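    Assuming the release directory was unpacked under the new path, repointing a Capistrano-style current symlink is a single ln call; the release timestamp below follows the listing above, so adjust it to whatever actually exists:
        cd /var/www/clients/client1/NEWSITE.com/web/apps
        ln -sfn releases/20120130171636 current
    The -n flag makes ln replace the existing symlink itself instead of descending into the directory it points at.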

    Read the article

  • CDN virtual subdomain causes duplicated content

    - by user3474818
    I have created a subdomain and a CNAME record which points to the domain root. The subdomain www.static.example.com is actually a copy of the entire website www.example.com, and it is supposed to act as a CDN and serve static content in order to improve speed. However, all of my content can be accessed via the subdomain as well, so Google has indexed it all and now I am dealing with duplicated content. How can I deny access to crawlers for the subdomain, bearing in mind that I do not have a different subfolder for the subdomain, so I can't create a separate robots.txt file?
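    One way around the shared docroot, sketched here for Apache (the alternate filename is hypothetical): rewrite requests for robots.txt to a different file only when they arrive on the static host.
        RewriteEngine On
        # serve a disallow-everything robots file only on the static subdomain
        RewriteCond %{HTTP_HOST} ^www\.static\.example\.com$ [NC]
        RewriteRule ^robots\.txt$ /robots_static.txt [L]
    Here /robots_static.txt would contain a blanket "User-agent: *" / "Disallow: /", while the main domain keeps its normal robots.txt.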

    Read the article

  • Google Analytics Views - Why Use Them?

    - by pee2pee
    I've been reading about Google Analytics views but still not sure why I would use them. I'm the only person in the company who understands and uses Google Analytics. We have no subdomains. Is there any reason why I would want to use views? Google Analytics has been going for some years now and I just created a copy of the original view but this has zero data, so I can't see how it would benefit me.

    Read the article

  • Recovering from an incorrectly deployed robots.txt?

    - by Doug T.
    We accidentally deployed a robots.txt from our development site that disallowed all crawling. This has caused traffic to dip dramatically, and Google results to report: "A description for this result is not available because of this site's robots.txt - learn more." We corrected the robots.txt about 1.5 weeks ago, and you can see our robots.txt here. However, search results still report the same robots.txt message. The same appears to be true for Bing. We've taken the following action: submitted the site to be recrawled through Google Webmaster Tools, and submitted a sitemap to Google (basically doing everything possible to say "Hey, we're here! And we're crawlable!"). Indeed, a lot of crawl activity seems to be happening lately, but still no description is crawled. I noticed this question where the problem was specific to a 303 redirect back to a disallowed path. We are 301 redirecting to /blog, but crawling is allowed there. This redirect is due to a site redesign: WordPress paths for posts such as /2012/02/12/yadda yadda have been moved to /blog/2012/02/12. We 301 redirect to the WordPress paths under /blog to keep our Google juice. However, the sitemap we submitted might have /blog URLs; I'm not sure how much this matters. We clearly want to preserve Google juice for URLs linked to us from before our redesign with the /2012/02/... URLs. So perhaps this has prevented some content from getting recrawled? How can we get all of our content, with links pointed to our site from before and after the redesign, reporting descriptions? How can we resolve this problem and get our search traffic back to where it used to be?
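    For reference, a minimal allow-everything robots.txt, which is presumably what the corrected file now looks like, is just this (an empty Disallow permits all crawling):
        User-agent: *
        Disallow: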

    Read the article

  • Consolidating multiple domain names

    - by Mike
    I have a client that has three separately hosted copies of their website, each on a separate domain name. The websites are all essentially the same, bar a few discrepancies caused by badly managed updates in the past. I will soon be launching a completely new website for them, at which point all three domain names are to resolve to the same web server. One domain name will become the default domain name that they refer to in all their literature, and the other two will simply be used as catch-alls for old links, bookmarks, and so on. I would like to know what people consider the best route to achieve this. My plan so far is: (1) get the new site up and running on the new webserver; (2) change the relevant A record of the default domain name to point to the new webserver; then either (3a) keep the existing hosting accounts in operation and create a list of 301 redirects from old page names on the old sites to new page names on the new site, or (3b) configure CNAME records for the non-default domain names, each pointing to the new webserver, and create a list of 301 redirects on the new site that redirect from old page names to new page names. If my understanding is correct, 3a will help to maintain whatever search engine rankings the sites already have (I know it's not going to be perfect), while at the same time informing search engines that the old domain names are no longer in use. What's a good approach to take here?
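    For option 3b, the catch-all domains can usually be funnelled with a single canonical-host rule on the new server; a sketch with a hypothetical domain (page-name changes would still need their own 301 map on top of this):
        RewriteEngine On
        # send any request on a non-default host to the default domain, keeping the path
        RewriteCond %{HTTP_HOST} !^www\.default-domain\.com$ [NC]
        RewriteRule ^(.*)$ http://www.default-domain.com/$1 [R=301,L]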

    Read the article

  • Tackling thin content on an image gallery

    - by Ted Wilmont
    We run an image gallery as part of our site; however, we have over 8,000 images, and every image has a separate HTML page of its own to display the image caption, related images and comments from users of the site. This seems to be a problem, especially with the Google Panda update, because these pages are technically "thin content". What would be the best way to tackle this? We'd love some feedback and advice regarding this scenario. We have a few options we thought of already but can't decide: We could noindex the separate image pages and lose any image search listings we have, in favour of removing these thin pages from the index. We could 301 all of the individual image pages back to the image category listing, anchor each image (e.g. #img2122), and include all of the comments and descriptions on the category listing page itself. If we were to simply list all of the images and content on the category pages themselves, what's the best method? We could add all of the content in the anchor tags and use jQuery to display them in a box when a user clicks on the image, or we could use Ajax to retrieve the information. However, what's the best Ajax method for SEO? Any ideas, suggestions, tips or advice are greatly appreciated, and thank you in advance.
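    For the noindex option, the tag each image page would carry is the standard robots meta; "noindex, follow" drops the page from the index while still letting its links pass:
        <meta name="robots" content="noindex, follow">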

    Read the article

  • .com.au backordered domain: Do I have to return it if the original owner asks for it?

    - by vDog
    I was contacted by the original owner of a domain, asking me to give him the domain that I backordered a few weeks ago. The domain was abandoned for about 2 months before I bought it to eliminate competition for my client, but now I am faced with a threat that he will take this matter to court and to auDA (.au Domain Administration Ltd). Am I supposed to hand over the domain that I bought legally? I would like to know my rights in this situation.

    Read the article

  • Google Authorship: can I display:none for link to profile?

    - by RubenGeert
    I'd like to have my 'mugshot' in Google's SERPs, but I couldn't care less about Google+, and I don't really want to link my website to it either. Can I use CSS display: none; on the link leading to my profile and still have authorship? The link looks like <a href='https://plus.google.com/111823012258578917399?rel=author' rel='nofollow'>Google</a>. Will the nofollow attribute here spoil things? I don't want to lose 'link juice' to Google+ if I don't have to. Now, Google should crawl only the HTML, but I'm sure they'll figure out the link is not visible (perhaps it's technically even cloaking). Does anybody have experience with this situation? And do I really have to become (reasonably) active on Google+ in order for authorship to show? This answer suggests I do, but I didn't read anything about that in Google's guidelines.

    Read the article

  • Why is Google still not indexing my #! website?

    - by Zubair
    I have been working on a website which uses #! (2minutecv.com), but even after 6 weeks of the site being up and running and conforming to the Google hash-bang guidelines stated here, you can still see that Google hasn't indexed the site yet. For example, if you use Google to search for "2MinuteCV.com benefits", it does not find this page, which is referenced from the homepage. Can anyone tell me why Google isn't indexing this website? Update: Thanks for all the help with this answer. So, just to make sure I understand what is wrong: according to the answers, Google never actually indexes the pages after the JavaScript has run. I need to create a "shadow site" which Google indexes (which Google calls HTML snapshots). If I am right in thinking this, then I can pick a winner for the bounty.
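    For reference, under Google's AJAX crawling scheme the crawler rewrites the hash-bang URL and expects the server to answer the rewritten URL with an HTML snapshot; the path below is hypothetical:
        http://2minutecv.com/#!benefits
        is fetched by the crawler as
        http://2minutecv.com/?_escaped_fragment_=benefits
    If the server returns nothing useful at the _escaped_fragment_ URL, the crawler only ever sees the empty pre-JavaScript shell.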

    Read the article

  • Does a system exist to facilitate virtual meetings and file sharing?

    - by CSharp Mania
    I'm looking for a system that is similar to an online classroom setup but allows for virtual meeting rooms with video/audio conferencing and, of course, file sharing. I'd prefer an open-source solution that I can edit/tweak myself as needed, and that is, of course, free. Ultimately, I guess what I'm looking for is something that we could tweak to give our own "branded" look and feel, along with full integration within our own servers; thus the reason I brought up open-source solutions. Do you masters of the web know of such a system? If so, do you have a preferred one that you would suggest? Or can such a system be developed by slapping together a couple of open-source projects to arrive at what is desired? Thanks for sharing your expertise. (FYI - I am a developer who is comfortable with PHP and C#. I'm not experienced with Ruby or Python, but a system using them or something else is acceptable. We can figure it out, I'm sure.)

    Read the article

  • Assign subdomains to separate ports on web server

    - by Michael Frank
    I have set up an Abyss web server as a little experiment, and I want to know if it is possible to assign subdomains to different ports on the machine the web server is running on. I have a couple of web UIs that I'd like to assign addresses to:
        192.168.1.1:8000 becomes example.com/webui1/
        192.168.1.1:8001 becomes example.com/webui2/
    The web UIs are currently available by accessing their ports directly, e.g. example.com:8000. I have tried using a reverse proxy, but it seems that this is only usable on one internal IP at a time. What other options do I have?
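    If the front end supports reverse proxying by path, each UI can be mounted under its own prefix on the same internal IP. Here is the shape of it as an Apache sketch (mod_proxy and mod_proxy_http enabled); I can't vouch for Abyss's exact syntax, so treat this as the pattern rather than the configuration:
        ProxyPass        /webui1/ http://192.168.1.1:8000/
        ProxyPassReverse /webui1/ http://192.168.1.1:8000/
        ProxyPass        /webui2/ http://192.168.1.1:8001/
        ProxyPassReverse /webui2/ http://192.168.1.1:8001/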

    Read the article

  • When will my old page stop appearing on Google?

    - by Bane
    I recently bought a new address for my Blogger blog, from yannbane.blogspot.com to www.yannbane.com. However, www.yannbane.com addresses do not appear when they are searched for! Is this natural? How much time will it take for Google to update its index? yannbane.blogspot.com 301's to www.yannbane.com. Both are added to my Webmaster Tools account, but it shows no data for www.yannbane.com (strangely). And, finally, is there something I could do to speed up the process?

    Read the article

  • Approach to retrieve files from server

    - by Aerus
    I'm in the process of making a Java application with a corresponding update application. At any given time the user may want to update the application, and the updater will ask for a list of files of the latest release. Based on this list, the updater can determine which files need to be downloaded to complete the update. I now have two approaches to solve this, but I would like to know which approach will put the least stress on my application and server: (1) I could send a list of the files I want to download to my server, and the server zips the files and simply returns this compressed file to the application; (2) the updater sends a request for each separate file to the server, which simply returns the file. The application will be used mainly in Belgium and The Netherlands, and connections/bandwidth tend to be pretty decent here. The average size of a single file should be around 100 KB and at most 1 MB. I expect an update to have anywhere between 10 and 50 new files, and at most 100 people/day to update the application, i.e. in the week when a new version is released. I hope this is enough information to sketch my problem, and any advice is welcome. If there is another common way to tackle this, I'd be glad to hear it.
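    A minimal client-side sketch of option (1); the endpoint URL and wire format (file names posted one per line, zip streamed back) are assumptions:
        import java.io.IOException;
        import java.io.OutputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.StandardCopyOption;
        import java.util.List;
        import java.util.zip.ZipEntry;
        import java.util.zip.ZipInputStream;

        public class UpdateDownloader {
            /** Posts the list of wanted files and unpacks the zipped reply into targetDir (an absolute path). */
            public static void downloadAndUnpack(List<String> files, Path targetDir) throws IOException {
                URL url = new URL("https://updates.example.com/bundle"); // hypothetical endpoint
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setRequestMethod("POST");
                conn.setDoOutput(true);
                try (OutputStream out = conn.getOutputStream()) {
                    out.write(String.join("\n", files).getBytes("UTF-8")); // one file name per line
                }
                try (ZipInputStream zip = new ZipInputStream(conn.getInputStream())) {
                    ZipEntry entry;
                    while ((entry = zip.getNextEntry()) != null) {
                        if (entry.isDirectory()) continue;
                        Path dest = targetDir.resolve(entry.getName()).normalize();
                        if (!dest.startsWith(targetDir)) continue; // ignore zip-slip entries
                        if (dest.getParent() != null) Files.createDirectories(dest.getParent());
                        Files.copy(zip, dest, StandardCopyOption.REPLACE_EXISTING);
                    }
                }
            }
        }
    Zipping also compresses the payload, so for 10-50 small files a single bundled request is usually the lighter option for both the server (one connection instead of fifty) and the client.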

    Read the article

  • Packaging a web application for deploying at customer site

    - by chitti
    I want to develop a Django web app that would be deployed at the customer site, running in the customer's private cloud environment (an ESX server here). The web app would use a MySQL database. The problem is that I would not have direct access to, or control of, the web app. My question is: how do I package such a web application, with its database and other entities, so that it's easier to upgrade/update the app and its database in the future? Right now the idea I have is to provide a VM with the Django app and database set up; the customer can just start the VM and he would have the web app running. What are the other options I should consider?

    Read the article

  • Ditch cPanel / WHM in favour of manual setup

    - by BWRic
    We currently use cPanel/WHM on a reseller account but are looking at getting a dedicated server. My first thought was to duplicate this setup on the dedicated box to allow us to quickly create new accounts. It'll be a managed server, so they'll have set up the LAMP stack. I'm curious whether I actually need cPanel and WHM. We don't use many of the features of cPanel/WHM, just creating accounts and databases; clients do not have FTP access. I'm no sysadmin and come from a Windows/GUI background, but I have some knowledge of setting up development servers. WHM (creating accounts): I presume this sets up the Apache virtual host, FTP access and DNS settings. I have some knowledge of editing the Apache files to create virtual hosts. Am I correct in thinking that as long as the DNS is pointing to the server IP and the virtual host is configured, the server can serve the (PHP) pages? I'm not sure I need per-site FTP access, as only we will have access, so I could have server-wide/htdocs-only access to view all the sites. The company who supply the dedicated hosts would also provide their own DNS management tool, so I don't need the cPanel one. MySQL (creating users and databases): we use cPanel to create the MySQL users and databases. As it's a dedicated box and I can have root access, I think this could be replaced by SQLyog for DB management and phpMyAdmin for user management. Do I need cPanel, or can I get by editing a few text files to create the accounts, then use the MySQL tools for databases? Or am I missing something major with how the sites are configured?
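    That understanding is correct in outline. Without WHM, a new account is roughly one Apache vhost plus one database/user pair; a sketch with placeholder names (clientsite.com, clientdb and clientuser are assumptions):
        <VirtualHost *:80>
            ServerName www.clientsite.com
            DocumentRoot /var/www/clientsite.com/htdocs
        </VirtualHost>
    and, on the MySQL side:
        CREATE DATABASE clientdb;
        CREATE USER 'clientuser'@'localhost' IDENTIFIED BY 'change-me';
        GRANT ALL PRIVILEGES ON clientdb.* TO 'clientuser'@'localhost';
    Reload Apache after adding the vhost and point the domain's DNS at the server IP, and the site is served.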

    Read the article

  • Redirecting from a 1und1 hosting solution, with URLs intact

    - by Jelmar
    I have done this before on GoDaddy without a hitch, but I cannot seem to figure out this particular case. I have a domain space with the temporary URL http://yogainun.mysubname.com/ and am hosting the domain name that is to be applied to it at 1und1.de. Right now I have set it up so that the domain name http://www.yoga-in-unternehmen.de/ is frame-redirected from 1und1 to the subdomain I just referred to. But this is not what I want: http://www.yoga-in-unternehmen.de/ is to be the domain. With the frame redirect, URLs like http://www.yoga-in-unternehmen.de/example-article do not show up, but that is exactly what I want them to do. With GoDaddy, in a similar case, I just turned on DNS and changed the name servers. That worked without problem, but with 1und1 it doesn't. Is there something I am missing?
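    If 1und1 allows editing DNS records directly, the GoDaddy-style fix applies here too: drop the frame redirect and point an A record at the hosting server, e.g. (the IP is a placeholder):
        www.yoga-in-unternehmen.de.  IN  A  203.0.113.10
    With DNS pointing at the host, deep URLs like /example-article resolve normally instead of being hidden inside a frame.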

    Read the article

  • Multisite Network SEO: Can a self-referencing canonical tag (rel="canonical") inside an article improve Google ranking?

    - by user5674576
    Can a self-referencing canonical tag (rel="canonical") inside an article improve Google ranking? The case: a company has 40 sites with original content, and 1 main site with some of the 40 sites' articles. The main site has rel="canonical" in each article. Should an article on the original site also have a self-referencing rel="canonical"? Example:
        inside the main network site (referencing the other site): <link href="http://site7.com/article25" rel="canonical" />
        inside the original network site (self-reference): <link href="http://site7.com/article25" rel="canonical" />
    Thanks in advance.

    Read the article
