Search Results

Search found 5380 results on 216 pages for 'webmasters'.

Page 136/216 | < Previous Page | 132 133 134 135 136 137 138 139 140 141 142 143  | Next Page >

  • Ideas to tackle unwanted bad press/review on Google's SERP?

    - by Rob
    After Googling our company name, to our horror we found that someone on Yelp.co.uk has reviewed our company. On the SERP your eye is immediately drawn to the 2-star review some complete stranger has written, which, to be honest, is pure slander! The most infuriating thing is that the person who reviewed our company has never even been a client/customer. It's a bit like me reviewing a restaurant having never eaten or even set foot in there! We've sent her a private message on Yelp asking her to remove the review, and we've also sent a complaint to Yelp themselves, but have yet to get a reply. We've resisted going mad at the reviewer and have even asked her to re-review us now that we've relaunched our new website (it still riles us that she's not even a client, though!). We've had genuine customers/clients review us on Yelp, yet this 2-star review remains on Google's SERP. Roughly how long would it take for our new reviews to overtake this one? Does anyone have any suggestions on how we can push the review off the first page of Google's SERP, or any creative ways to tackle this issue?

    Read the article

  • Usefulness of the Backlinks shown in Webmaster Tools

    - by Ewan Heming
    Is the list of links for a site shown in Google Webmaster Tools a complete list or just a sample? I've noticed that the links in there appear to be all the ones I didn't think would have any real value, either because they were nofollow or from irrelevant sites. The few I did think would be of some use have never shown up, and there are also some links that are sometimes there and sometimes not (such as my LinkedIn profile). Does this mean that the missing links don't, or no longer, carry any value? It almost appears that the list is there for Google either to inform you about problems (there was a useful list there when someone tried to spam my site) or to misinform you about which link-building strategies work (to keep people guessing about what works and what doesn't).

    Read the article

  • Finding/hiring help with server administration

    - by letseatfood
    I need to install the fileinfo extension for PHP on my server. My hosting service does not offer help with this. I know I could learn to do it myself, but since I am an independent contractor and server administration is the weakest part of my skill set, a DIY approach is going to cost me more time than I can afford. What is the best way to go about hiring a trustworthy contractor to either install the fileinfo extension for me or train me in how to do it? Thank you.
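
    For context: on PHP 5.3 and later the fileinfo extension ships with PHP and is enabled by default, so the job may be as small as upgrading PHP or flipping a php.ini line; on older versions with shell access it was historically installed from PECL. A hedged sketch of that route (commands and paths assume a typical Linux box):

        # PHP < 5.3, with shell access and the PHP dev tools installed:
        pecl install fileinfo

        # then enable it in php.ini (php_fileinfo.dll on Windows) and restart the web server:
        extension=fileinfo.so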

    Read the article

  • SEO tool is telling me title, description and keywords don't exist, but they do. Where is the problem?

    - by DaveDev
    I'm using the following tool to analyse how "optimal" a site I'm working on is for search engines: http://tools.seobook.com/general/spider-test/ I enter the URL for the site - http://ftmsuat.moneymate.com - into the search bar, and it returns a breakdown of the contents of the page. I'm a little confused by what I see, though. According to the results, the page doesn't have a title, description or keywords, but if you check the source of the page, those elements are definitely there. So which is wrong: seobook.com or my page?

    Read the article

  • Do jQuery and MooTools usually conflict if both are used on a webpage? [migrated]

    - by Charming Prince
    I have this website I am designing. I tried using MooTools 1.3.1 to animate some of the div boxes when clicked, or when the mouse hovers over them, to show the content. The thing is that it doesn't seem to work on the webpage, but if I try the same script on a blank webpage it works. I'm thinking it's probably because I have jQuery 1.5.2 on the same page and the two scripts are conflicting with each other, because if I remove the jQuery, the MooTools works. What are my options? I need the jQuery to do some validations for me, so I can't remove it completely. Here is the script:

        <script>
        //-vertical
        var mySlide = new Fx.Slide('test');
        $('slidein').addEvent('click', function(e){
            e = new Event(e);
            mySlide.slideIn();
            e.stop();
        });
        $('slideout').addEvent('click', function(e){
            e = new Event(e);
            mySlide.slideOut();
            e.stop();
        });
        $('toggle').addEvent('click', function(e){
            e = new Event(e);
            mySlide.toggle();
            e.stop();
        });
        $('hide').addEvent('click', function(e){
            e = new Event(e);
            mySlide.hide();
            e.stop();
        });
        </script>

    Here's the HTML:

        <html>
        <h3 class="section">Fx.Slide Vertical</h3>
        <a id="slideout" href="#">slideout</a> |
        <a id="slidein" href="#">slidein</a> |
        <a id="toggle" href="#">toggle</a> |
        <a id="hide" href="#">hide</a>
        <div id="test">
            Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod
            tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam,
            quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo
            consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse
            cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non
            proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
        </div>

    Here's the CSS:

        #test {
            background: #222;
            color: #fff;
            padding: 10px;
            margin: 20px;
            border: 10px solid pink;
        }
        #test2 {
            background: #222;
            color: #fff;
            padding: 10px;
            margin: 20px;
            border: 10px solid pink;
        }

    I'm using the exact same code supplied by MooTools in their own example. On a blank webpage it works, but incorporated into my own page it doesn't, and my own page just has the jQuery script tag in the head section of the HTML.
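
    For what it's worth, both libraries claim the global $ by default, and whichever loads last wins, which matches the breakage described above. jQuery ships a documented escape hatch, jQuery.noConflict(), which hands $ back to whatever defined it before jQuery loaded, so the usual recipe is: load MooTools first, then jQuery, then release $. A minimal sketch (script paths are placeholders):

        <script src="mootools-1.3.1.js"></script>
        <script src="jquery-1.5.2.js"></script>
        <script>
        // Give $ back to MooTools; keep jQuery available under an alias.
        var jq = jQuery.noConflict();

        // jQuery code now uses the alias...
        jq(document).ready(function () {
            // ...validation code goes here, written with jq(...) instead of $(...)
        });

        // ...while the MooTools example keeps using $ unchanged.
        var mySlide = new Fx.Slide('test');
        $('slidein').addEvent('click', function (e) {
            e = new Event(e);
            mySlide.slideIn();
            e.stop();
        });
        </script>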

    Read the article

  • Alexa indexing browsing history?

    - by Haluk
    We have this test.php sitting around in a forgotten folder. It is a script which just sends an email to our site admin. We never had a page linking to it. It is not indexed by Google. It does not exist in the Internet Archive Wayback Machine. But every now and then it gets crawled by ia_archiver. I wonder how it got indexed. Could it be because of the Alexa toolbar installed on our computer? Does Alexa index our personal browsing history?

    Read the article

  • Do Not Track feature of IE10

    - by Pete Herbert Penito
    One of our clients is getting a bit worried about the new "Do Not Track" feature of Internet Explorer 10. Her site is heavily dependent on PHP sessions (as I imagine many other sites are). This is what she was reading: http://www.bbc.co.uk/news/technology-18288710 I need some clarification: will this affect how sessions (or cookies) work on normal websites that use the PHP $_SESSION array, or does it concern only how advertising works (Engadget's article seems to insinuate this)? Can anyone provide a more technical overview of the ramifications for PHP-powered websites?
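
    For what it's worth, Do Not Track is purely advisory: a browser with the setting enabled adds one extra request header, and nothing else changes on the wire. A sketch of such a request (values illustrative):

        GET /account.php HTTP/1.1
        Host: www.example.com
        Cookie: PHPSESSID=abc123def456
        DNT: 1

    The session cookie is still sent, so $_SESSION keeps working exactly as before; whether anyone honours the DNT: 1 header (typically by disabling ad or analytics tracking) is left entirely to the site or ad network.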

    Read the article

  • Using paypal to process credit cards in Sweden through an API [on hold]

    - by Mastikator
    I'm looking for a PayPal API that lets me process credit cards to make payments without being redirected to a PayPal site and without forcing consumers to use a PayPal account, and it needs to work in Sweden. I've looked at DoDirectPayment, Express Checkout and the PayPal Pro gateway, and none of them let me process credit cards in Sweden via an API that doesn't force the user to visit the PayPal login site. I have a form on my webpage where the user types their credit card number, CVV2, expiration, name, address, etc. I need an API that works in Sweden and simply processes the request, without the step of being redirected to a PayPal website. The ones I have found only work in a select few countries; is there an international solution? I've already spent over 12 working hours just looking for an API that meets my requirements.

    Read the article

  • Should I set NOINDEX header for my JS, CSS and image files?

    - by Yoga
    Is there any harm if my site sends NOINDEX headers for all my static assets? By image files I mean the valueless ones, e.g. background images, button images, etc. Update with more background information: my concern is that Google recently said they also execute JS and might fetch content via Ajax. So, for example, if I send noindex for my jQuery script and Google were then unable to use it to load Ajax content, I suppose that would not be good for my site's SEO, right?
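
    One point worth noting: a noindex sent via the X-Robots-Tag response header only keeps a file out of the index; it does not stop Googlebot from fetching and executing it (blocking fetches is robots.txt's job), so Ajax loading should be unaffected. A minimal sketch for Apache, assuming mod_headers is enabled:

        <FilesMatch "\.(js|css|png|gif|jpg)$">
            Header set X-Robots-Tag "noindex"
        </FilesMatch>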

    Read the article

  • Apache2 and FTP

    - by Jo Colina
    I just set up an Apache web server on my Raspberry Pi, along with MySQL and PHP5, and to upload files I set up vsftpd. The thing is that the FTP connection dropped me into my pi user's home directory instead of /var/www, so I changed the pi user's home directory to /var/www and then changed it back to its previous home. FTP now sends me to /var/www, but whenever I upload files, the permissions for "other" users are null: Apache sends a 403 Forbidden every time unless I manually chmod the files uploaded via FTP inside /var/www. Does anyone know how to fix this? Thanks!
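
    A likely culprit is vsftpd's default local_umask of 077, which strips group/other read permission from every upload, so Apache cannot read the files and returns 403. A sketch of the usual fix, assuming Debian-style paths:

        # /etc/vsftpd.conf -- make uploads come out 644/755 instead of 600/700
        local_umask=022
        write_enable=YES

        # then restart vsftpd and repair what is already there:
        #   sudo service vsftpd restart
        #   sudo chmod -R a+rX /var/www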

    Read the article

  • Are chmod 700 and .htaccess "deny from all" enough?

    - by John Jenkins
    I would like to protect a public directory from public view. None of the files will ever be viewed online. I chmodded the directory to 700 and created an .htaccess file containing "deny from all". Is this enough security, or can a hacker still gain access to the files? I know some people will say that hackers can get into anything, but I just want to make sure there isn't anything else I can do to make it harder to hack. To clarify: I am asking whether chmod 700 and "deny from all" alone are enough security to prevent hackers from getting my files. Thanks.
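
    For reference, the two measures operate at different layers: chmod 700 restricts filesystem users, while the .htaccess rule restricts HTTP clients, and the latter only takes effect if Apache's AllowOverride setting allows it. The exact syntax also depends on the Apache version; a hedged sketch of both forms:

        # Apache 2.2
        Order deny,allow
        Deny from all

        # Apache 2.4+
        Require all denied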

    Read the article

  • Is there any advantage/disadvantage to using robots.txt to disallow access to legal pages such as terms, privacy policy, etc.?

    - by CaptainCodeman
    As I understand it, repetitive content is a detriment to search engine placement. Given that many websites use similar or even identical "Terms and Conditions" and "Privacy Policy" pages, due to similar legal wording or to copy & pasting from the same source, would it be a good idea to disallow access to these pages via robots.txt in order to avoid being penalized for non-original content? Or, on the contrary, could the search engines identify this as circumvention and penalize the site for trying to hide content? Or does it not matter?
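
    If you do go the robots.txt route, the rule itself is one line per path (the paths below are placeholders); bear in mind that robots.txt only discourages crawling, so a disallowed URL can still surface in results if other sites link to it:

        User-agent: *
        Disallow: /terms
        Disallow: /privacy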

    Read the article

  • AdSense sent an email saying my account has been approved when it already was approved

    - by moomoochoo
    My account has been approved and running adverts for quite some time now. However, today I got a message (it seems legitimate) from Google AdSense saying: "Congratulations, your AdSense account has been approved to show AdSense ads on your own website. Within a few hours, you will begin to see live ads." Should I be concerned? They say they review accounts to check for compliance; could this be some odd way of saying they rechecked my sites and they complied?

    Read the article

  • Affiliate software to attract incoming customers

    - by Steve
    I am close to starting a new website for a small business which imports products from USA to Australia. The wholesaler says he will allow my client to be the sole distributor for Australia & New Zealand. I'm not sure what CMS or shopping cart software to use yet, but it will need to include an affiliate system to allow advertisers to push customers our way. Do you have any suggestions for robust, flexible affiliate software?

    Read the article

  • Storing User-uploaded Images

    - by Nyxynyx
    What is the usual practice for handling user-uploaded photos and storing them in the database and on the server?

    For a user profile image:
    1. After receiving the image file from the user, rename the file to <image_id>_<username>
    2. Move the image to /images/userprofile
    3. Add the image filename to a users table containing profile details like first_name, last_name, age, gender, birthday

    For an image attached to a review written by a user:
    1. After receiving the image file from the user, rename the file to <image_id>_<review_id>
    2. Move the image to /images/reviews
    3. Add the image filename to a reviews table containing details like review_id, review_content, user_id, score

    Question 1: How should I store the image filenames if the user can upload multiple photos for a particular review? Serialize?
    Question 2: Or should I have another table review_images with columns review_id, image_id, image_filename just for tracking images? Will doing a JOIN when retrieving the image_filename from this table slow down performance noticeably?
    Question 3: Should all the images be stored in a single folder? Will there be a problem when we have 100K photos in the same folder? Is there a more efficient way to go about this?
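
    On questions 1 and 2, a separate join table is the conventional answer; serializing filenames into one column makes them impossible to query individually, while a JOIN on an indexed review_id column is cheap at this scale. A hypothetical MySQL sketch:

        -- one row per uploaded review photo
        CREATE TABLE review_images (
            image_id       INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            review_id      INT UNSIGNED NOT NULL,
            image_filename VARCHAR(255) NOT NULL,
            INDEX idx_review (review_id)
        ) ENGINE=InnoDB;

    On question 3, a common dodge for huge directories is to shard uploads into subfolders, e.g. by the first couple of characters of a hash of the filename, so no single folder accumulates 100K entries.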

    Read the article

  • Leveraging a hosted web font service from a local development server?

    - by Tom Auger
    There are a number of popular web font services on the market today that "host" the fonts and serve them to your web page via JavaScript or CSS pointing to remote locations, for example http://webfonts.fonts.com or http://typekit.com. However, there seems to be an issue when you're developing on a local testing server: the remote font services can't validate the requesting domain, and return 403 Access Denied errors and the like. What workarounds are there for using a remote hosted font service on a local development server?

    Read the article

  • Dedicated server: managed hosting or manage it myself?

    - by ddawber
    We're currently hosting a number of sites on a self-managed dedicated server. Some companies, however, offer a managed dedicated server hosting service. They offer:

    - Roughly the same server spec
    - Ticketing-system support
    - Managed daily backups
    - A virtual firewall (but with a limit of 10 IP addresses allowed through at any one time)

    Now, this managed hosting comes at extra expense, somewhere in the region of $500 per month, and the limit on the number of IP addresses they'll manage on the firewall is also a real pain. My thinking is that it would be better and cheaper to:

    - Stay with the same host, since the dedicated box is fine
    - Get an Amazon AWS account and use their servers to manage backups; there are a number of good tools that can automate the process
    - Configure iptables so that I have complete control of the firewall

    I want to know:

    1. Is a managed virtual firewall likely to be more secure than me configuring iptables?
    2. Whether, in your opinion, it's best to let someone else take care of backups?
    3. If, from your experience, there's anything else I'm missing that warrants using managed hosting over a DIY service?

    I think there is some reluctance about not having managed hosting, since a managed host in effect takes responsibility for your server, whereas any hardware or security issue with a server that we manage would mean we are forced to hold our hands up when a client site goes down. That said, I personally don't think a managed host does that much in the day-to-day running of your server (backups are automatic, OS updates are carried out with ease, etc.).

    Read the article

  • Which of these URL scenarios is best for big link menus? [seo / user-friendly urls]

    - by Sam
    Hi folks, a question about URLs. A good friend of mine and I are exploring three possible scenarios for a website where each page has a menu system with about 130 links:

    Scenario 1: the menu system has SHORT non-descriptive hyperlinks and a SHORT canonical URL: the link is <a href="design">dutch design</a> and the page's canonical URL points to e.g. "design".

    Scenario 2: the menu system has SHORT non-descriptive hyperlinks with LONG canonical URLs: the link is <a href="design">dutch design</a> and the page's canonical URL points to "dutch-design-crazy-yes-but-always-honest".

    Scenario 3: the menu system has LONG descriptive hyperlinks with LONG canonical URLs: the link is <a href="dutch-design-crazy-yes-but-always-honest">dutch design</a> and the page's canonical URL points to "dutch-design-crazy-yes-but-always-honest".

    Currently we have scenario 2; should we move to scenario 3? All three work fine and point, via mod_rewrite, to the same page, which is fetched behind the scenes. Now, my question is which of these is better in terms of:

    - user-friendliness (page loading times, full URL visible in the URL bar or not)
    - SEO-friendliness (proper indexing, thanks to URLs containing descriptive, relevant tags)
    - other concerns we forgot, like possible penalties for so many words in link hrefs?

    Thanks very much for your suggestions: much appreciated!

    Read the article

  • SEO for images: can I use a different (cookieless) domain?

    - by Oliver
    Hello, we want to increase the value of some of our important images by means of SEO, and we want to start serving them from a different, i.e. cookieless, domain. We want to go from http://www.example.com/images/1234.jpg to http://www.example.com/germany/bavaria/landscape.jpg, which can easily be done via URL rewriting. On the other hand, we would like to serve the image from a completely different domain, let's say http://www.examplestatic.com/germany/bavaria/landscape.jpg, to save the overhead of sending the www.example.com cookie. Somehow I feel this is not a good idea, because putting the image on a different domain moves it away from the content. Can anyone shed some light on this problem? Naturally, I would just use a different subdomain, e.g. img.example.com, but we already use subdomains for languages, and our cookies are valid for all subdomains of example.com, so this won't help. I'd really appreciate any hints. Cheers!

    Read the article

  • Is there a better way to have a two-column website with header and footer, equal-height columns and stretchy column widths? [closed]

    - by Seamus
    I wrote a website a while ago that is a little messy in how it does things. I used this CSS template and this equal-height-columns trick. I have not one but two container divs, and I can't remember what they're doing. So I'm thinking of restructuring the thing from scratch, possibly making use of the more "semantic" HTML5 tags like <nav> and so on at the same time. The question is: is there a better way to achieve a site structure with these properties (see the sketch after this list)?

    - two equal-height main columns (with widths as percentages of the available real estate, not explicitly stated)
    - both a header and a footer element that stretch the whole width of the two main columns
    - allows the use of semantic HTML5 tags instead of meaningless divs
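
    One modern route, where browser support allows, is flexbox: a flex container stretches its children to equal height by default, and flex ratios replace hard-coded widths. A minimal sketch (class names are placeholders, not taken from the original template):

        <style>
            .columns { display: flex; }   /* children stretch to equal height by default */
            .columns nav  { flex: 1; }    /* widths as ratios of the available space */
            .columns main { flex: 3; }
        </style>
        <header>header</header>
        <div class="columns">
            <nav>menu</nav>
            <main>content</main>
        </div>
        <footer>footer</footer>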

    Read the article

  • URL-rewriting on Plesk using ISAPI_rewrite3 Lite

    - by Anusha
    I am using a Plesk-based Windows web server (Windows Server 2008 with IIS 6) for my e-commerce website. I want to rewrite URLs for all dynamic pages, so I installed ISAPI_Rewrite 3 Lite on the web server and uploaded a .htaccess file with basic rules as follows: RewriteEngine on RewriteRule ^contact\.html$ contactus.php? [NC,R] I have never worked with ISAPI or with URL rewriting before. My doubt is how to proceed after installation: should I upload a .htaccess file or a httpd.conf file? The software also has an ISAPI_Rewrite Manager, which provides a place to edit httpd.conf; should I write the rules there? I have tried all these steps, but unfortunately I couldn't find a remedy. Any immediate solution would be appreciated.
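
    For what it's worth, the Lite edition of ISAPI_Rewrite 3 reads only the single global httpd.conf (per-directory .htaccess files are, as far as I recall, a paid-edition feature), so rules like the one above would belong there, e.g.:

        # httpd.conf, in the ISAPI_Rewrite installation directory
        RewriteEngine on
        RewriteRule ^contact\.html$ contactus.php? [NC,R]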

    Read the article

  • How to create email request forms and auto-responder?

    - by mfc
    I'm building a site in CSS, and I'm pretty new to any code or script other than HTML and CSS. I'm trying to create a landing page that requires an email address from visitors, and to set up an auto-responder that sends to the newly submitted address. This would also serve as a signup for email newsletters. I have some idea how to create the form and have looked into it a bit, but I don't know how to make it a requirement for getting past the landing page into the actual website, or how to set up the auto-responder. Any help would be much appreciated, or a source that explains how to do this in particular. I tried lynda.com, but everything is so general that I can't seem to find info on exactly how to do this, even though I know it's quite common. Thanks!
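
    The form itself is plain HTML; the file it posts to (subscribe.php below is a placeholder) is where a server-side script would store the address and send the auto-response, e.g. with PHP's mail() function, though hosted newsletter services such as MailChimp generate both the form and the auto-responder for you. A minimal sketch:

        <form action="subscribe.php" method="post">
            <label for="email">Email address:</label>
            <input type="email" id="email" name="email" required>
            <input type="submit" value="Sign up">
        </form>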

    Read the article

  • robots.txt, how effective is it and how long does it take?

    - by Stefan
    We recently updated the site to a single-page site that uses jQuery to slide between "pages", so we now have only index.php. When you search for the company on engines such as Google, you get the site and a listing of its sub-pages, which now lead to outdated pages. Our plan doesn't allow us to edit the .htaccess, and the old pages are .html docs, so I cannot use PHP redirects either. So if I put in place a robots.txt telling the engines not to crawl beyond index.php, how effective will this be in preventing/removing the crawled sub-pages? And, at a rough guess, how long before the search engines update?
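
    A sketch of such a robots.txt; note that Allow and the $ anchor are extensions that Google honours but not every crawler does, and that blocking crawling alone does not remove URLs already in the index (the URL removal tool in Webmaster Tools typically acts within days, where waiting for a natural re-crawl can take weeks):

        User-agent: *
        Allow: /index.php
        Allow: /$
        Disallow: /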

    Read the article

  • How to choose, set and use keywords while structuring a website?

    - by mechdeveloper
    I have been working on my personal website for some time. I think I have been doing a good technical job but, unfortunately, a terrible job of structuring the website, because I didn't care about the keywords I was going to use. Although it is my personal website, the main part is the blog, so I'd like the keywords to relate to the blog's content. At present Google Webmaster Tools is displaying a lot of keywords that have nothing to do with the content of the website, and some SEO reporting sites such as WooRank say that the keyword optimization of the website is awful. So I have three questions:

    1. How do I choose, set and use keywords while structuring a website?
    2. (Optional) Which methods and sources do search engines use to collect the keywords of a website?
    3. Some high-profile websites aren't optimized for this either; should I be concerned about it anyway, or is there anything more important I should be concerned about?

    (If you want to see the website, please check my profile.)

    Read the article

  • Why do 410 pages show as errors in Google Webmaster Tools?

    - by ElHaix
    To remove links from our site, we return a 410 code for the links we want removed, and the page shows "The page you requested was removed." In Webmaster Tools, I see all the 410 pages under Crawl Errors / Not Found. I'm worried that, because they appear in Crawl Errors, they could be negatively affecting SEO rankings. Is that the case, and if so, should I change the return codes from 410 to something else?

    Read the article
