Search Results

Search found 5380 results on 216 pages for 'webmasters'.

Page 102/216

  • I want to host clients' websites, but not their email. What's the easiest way to handle this?

    - by Phil
    My company lets non-technical users build their own niche industry websites on our server, which we host. They can currently point their nameservers to us at their registrar, but that leaves them with no access to their email if they've already set it up through said registrar. We don't want to interfere with their existing email, nor do we want to get into the business of setting up email for them through our service. Having them point A records/CNAMEs to us would work, but is this too complex for a non-techie user? We also thought of having them point their nameservers to us while pointing the MX records back to their existing provider, but this is also beyond their scope. Is there an easy way to leave their email records in their initial state? Any other ideas/feedback?
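
    For illustration, keeping DNS at the registrar and repointing only the web records might look like the sketch below; the hostnames, IP address and mail host are placeholders, not values from the question.

        ; hypothetical zone left at the client's registrar
        clientsite.com.        IN  A      203.0.113.10                     ; your web server
        www.clientsite.com.    IN  CNAME  clientsite.com.
        clientsite.com.        IN  MX 10  mail.existing-provider.example.  ; email stays untouched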

    Read the article

  • How to properly remove URLs from Google's index?

    - by ElHaix
    On some of our sites, we now have several thousand pages that dilute our website's keyword density. The website is an MVC site with SEO routing. If I submit a new sitemap with, say, only the 2000 or so pages that we want indexed, will Google re-index the site with only those 2000 pages and drop the superfluous ones, even though navigating to the diluting pages still works? For example, I want to keep roughly 2000 URLs like the following:

        www.mysite.com/some-search-term-1/some-good-keywords
        www.mysite.com/some-search-term-2/some-more-good-keywords

    and remove several thousand like the following that have already been indexed:

        www.mysite.com/some-search-term-xx/some-poor-keywords
        www.mysite.com/some-search-term-xx/some-poor-more-keywords

    These pages are not actually "removed", as navigating to these URLs still renders a page. Even though there are potentially hundreds of thousands of pages, I only want about 2000 to be re-indexed and retained, and the others removed (without having to do this manually). Thanks.
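
    A sitemap mostly tells Google what to crawl; on its own it doesn't remove pages that are still reachable. One common complement (a general suggestion, not something stated in the question) is to mark the unwanted routes as non-indexable while leaving them navigable, along these lines:

        <!-- in the <head> of each page that should drop out of the index -->
        <meta name="robots" content="noindex, follow">

        <!-- or, equivalently, sent as an HTTP response header for those routes -->
        X-Robots-Tag: noindex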

    Read the article

  • How do I allow e-mail to be relayed through this MTA?

    - by BlueToast
    When I try to send an e-mail using authenticationless relay via telnet, I receive the error message "553 sorry, that domain isn't allowed to be relayed thru this MTA (#5.7.1)". How can I whitelist a specific domain and allow it to be relayed through the MTA? There is only one domain I am trying to relay e-mails to (and that domain uses a totally different, independent and standalone mail server with IceWarp).

        220 mail4.myhsphere.cc ESMTP
        ehlo sisterwebsite.com
        250-mail4.myhsphere.cc
        250-PIPELINING
        250-8BITMIME
        250-SIZE 41943040
        250-AUTH LOGIN PLAIN CRAM-MD5
        250 STARTTLS
        mail from:[email protected]
        250 ok
        rcpt to:[email protected]
        553 sorry, that domain isn't allowed to be relayed thru this MTA (#5.7.1)
        rcpt to:[email protected]
        553 sorry, that domain isn't allowed to be relayed thru this MTA (#5.7.1)
        rcpt to:[email protected]
        553 sorry, that domain isn't allowed to be relayed thru this MTA (#5.7.1)
        rcpt to:[email protected]
        250 ok
        data
        354 go ahead
        To: [email protected]
        From: [email protected]
        Subject: Test mail -- please ignore
        Test, please ignore this
        Jane
        Sincerely,
        BlueToast
        .
        250 ok 1350407684 qp 22451
        quit
        221 mail4.myhsphere.cc
        Connection to host lost.

        C:\Users\genericaccount

    Not sure what to do. I did some Googling but I'm having a hard time finding relevant results; most of the search results I get are about trying to receive mail, but I am trying to send mail. mail.sisterwebsite.com = mail4.myhsphere.com. We use FluidHosting for the e-mail on sisterwebsite.com. (Repeating the question just in case:) How can I whitelist a specific domain and allow it to be relayed through the MTA?
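
    Since the banner above advertises AUTH LOGIN PLAIN CRAM-MD5, one likely explanation (an assumption, not something confirmed in the question) is that this host only relays for authenticated sessions rather than for whitelisted destination domains. The same telnet test with SMTP AUTH would look roughly like this; the credentials are base64-encoded placeholders, and the exact response wording varies by server:

        ehlo sisterwebsite.com
        auth login
        334 VXNlcm5hbWU6            (server prompt; base64 of "Username:")
        ZXhhbXBsZXVzZXI=            (your mailbox username, base64-encoded placeholder)
        334 UGFzc3dvcmQ6            (server prompt; base64 of "Password:")
        cGFzc3dvcmQ=                (your password, base64-encoded placeholder)
        235 ok, go ahead
        mail from:[email protected]
        250 ok
        rcpt to:[email protected]
        250 ok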

    Read the article

  • Best way to provide a folder-level 301 redirect

    - by Vinay
    I have a website hosted on Yahoo Small Business, so I don't have access to the .htaccess file. I have around 220 pages in a folder 'mysubfolder' (http://mysite.com/myfolder/mysubfolder), and the site is around 3 years old. I am planning to move all 220 pages in 'mysubfolder' to 'myfolder' (one level up). All the pages under 'mysubfolder' are indexed. What is the best way to do this so that it doesn't affect SEO?

    Read the article

  • What is the best way to promote a programming blog?

    - by paul
    (The guys from 'Programmers' referred me here...) How do you promote your programming blog? I recently started http://blackforestcoder.blogspot.com/ to record my progress working with new technologies and ideas. The main aim being to provide a list of pitfalls and solutions and also to get feedback from readers. Since I set it up 10 days ago I have only had about 2-3 hits even though Google is supposed to be indexing it. How might I boost the hit rate?

    Read the article

  • Can I import existing member data used in old ASP to a new ASP.NET membership database? [closed]

    - by Rick Brown
    I have an old website that I designed and still maintain using old ASP that has a membership database (MS-SQL) that I built from scratch. It is a very simple database that has all the user information in one table (including login info and personal info) and then details and other odds and ends in other tables. It is WAY past time to upgrade this to .NET, especially since I need to add a Paypal payment system into it as soon as I can. I've designed several other sites with membership in .NET, but they have all been from scratch. Is there an easy way to transition from the old ASP site to a new .NET membership database without losing the data? There are hundreds of users with thousands of records relating to those users that I'd rather not lose, if possible. Any ideas on a relatively painless way to do this?

    Read the article

  • How to change Internet Explorer settings through JavaScript? [on hold]

    - by Abhi
    I have a webpage which fetches a value dynamically from a config file (whose contents change after some interval). Initially I thought it might be a problem with my code, but when I cross-checked it with other browsers it ran successfully. On further research I changed a setting in Internet Explorer regarding temporary files: Tools > Internet Options > Browsing history (Settings), where I selected "Every time I visit the webpage" from among the 4 available options. I wanted to know: can I set this programmatically?
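
    A page can't change Internet Explorer's own settings from script. A common workaround (an assumption about the setup, since the question doesn't show how the config file is fetched) is to defeat the cache by making every request URL unique; a rough sketch with placeholder names:

        // "config.txt" and loadConfig are illustrative names, not from the question.
        // The timestamp query parameter makes each request URL unique, so IE cannot
        // serve a stale cached copy regardless of the browser's cache setting.
        function loadConfig(onLoaded) {
            var xhr = new XMLHttpRequest();
            xhr.open('GET', 'config.txt?nocache=' + new Date().getTime(), true);
            xhr.onreadystatechange = function () {
                if (xhr.readyState === 4 && xhr.status === 200) {
                    onLoaded(xhr.responseText);
                }
            };
            xhr.send();
        }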

    Read the article

  • Does anyone know any good resources for learning how to market a web app?

    - by Jack Kinsella
    I'm a developer first and foremost. I write web apps but have a hard time generating traffic and converting potential users once I've released my product into the wild. I know I need to learn more about marketing but I don't know where to start as I've no baseline to judge the quality of the materials I stumble across. Does anyone know any websites, blogs, e-books or other resources for learning how to market effectively?

    Read the article

  • Google Goals process not working through similarly named pages

    - by David
    Well, I'm at a loss. I've ensured that my tracking script is in place, and I've set up my goal and funnel path, but only the first step is ever shown in the funnel.

        Goal URL: /checkout/checkoutComplete/ (Type: Head Match)
        Step 1: /checkout/ (required)
        Step 2: /checkout/confirm/

    Should the goal URL instead be /checkout/checkoutComplete/(.*) and set to Regular Expression match, because there are parameters after the main part of the URL? (I thought that's what Head Match was for.) Both of the step URLs above are valid and correct URLs for my domain. But for some reason, the funnel visualization shows entries into the first step, then an exit count that matches the entry count (including exits to /checkout/confirm), and it doesn't go on to the next step! Perhaps I'm doing something obviously wrong, but I can't quite see it. Also, two semi-related questions: does making a change to the funnel only affect new incoming data? And how often does it update? Thanks in advance for your help.

    Read the article

  • Using Disqus for a website (question on SERPs and backlinks)

    - by Homunculus Reticulli
    I am building a website and am trying to decide between writing my own commenting system and using Disqus. One of the main deciding factors is that, obviously, I want comments left on my page to show up in SERPs. However, I remember reading somewhere that Disqus loads comments into a page using AJAX, and that the comments are therefore "invisible" as far as Googlebot and other search engine crawlers are concerned. Could someone confirm or refute this? The other question I have is about commenting: when I place a comment on another website using Disqus (including any links I may add to my comment), do the links in my comment count as backlinks (in other words, are they "dofollow" or "nofollow" links)?

    Read the article

  • WordPress Multisite and Google Analytics in subfolders with mapped domains

    - by David
    I have a WordPress multisite installation using subfolders. The subfolders are mapped to domains, which are set as primary. I'm using the 'Google Analytics Multisite Async' code to track things. From what I can see it's tracking the sites fine (each site gets page hits in Google Analytics), except for the original site in the multisite: its Content Overview lists the other domains as content, along with the traffic they're getting, in addition to the original domain's own traffic. I don't want to track any traffic for my original site other than what actually goes to it, i.e. I don't want it tracking the other sites in the multisite. E.g. domain1.com is my original site and I have lots of other sites in the multisite, say domain2.com and domain3.com; in Content Overview in Analytics, domain2.com shows up as content. Can I tell it to filter these out somehow, either in Analytics or within WordPress? Hopefully I've explained that clearly!
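
    One way this is often handled (a suggestion, not something from the question) is an include filter on the original site's Analytics profile, so that it only keeps hits whose hostname matches that domain; using the question's placeholder domain, the filter would look something like:

        Filter type:    Custom filter > Include
        Filter field:   Hostname
        Filter pattern: ^domain1\.com$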

    Read the article

  • How to 301 redirect from old query-string URLs to CakePHP canonical URLs?

    - by Daniel Bingham
    I currently have a .htaccess file that looks like this:

        RewriteCond %{QUERY_STRING} ^action=view&item=([0-9]+)$
        RewriteRule ^index\.php$ /index.php?url=item/%1 [R=301]
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*)$ index.php?url=$1 [QSA,L]

    It is meant to 301 redirect my old query-string-based URLs to the new CakePHP URLs, and it successfully sends users to the correct page. However, Google doesn't seem to like it (see below). I previously tried this instead:

        RewriteCond %{QUERY_STRING} ^action=view&item=([0-9]+)$
        RewriteRule ^index\.php$ /item/%1 [R=301]
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*)$ index.php?url=$1 [QSA,L]

    But that fails: the second rewrite rule doesn't seem to catch the rewritten URL, and the request goes straight through. Using the first version wouldn't be a problem, except that I suspect it is what is choking up Google. It hasn't indexed my sitemap full of the new URLs. My old sitemap had been fully indexed and all of its URLs are in Google's index, but Google isn't following the redirects from the old URLs to the new ones; I have a 'not followed' error for every one of the query-string URLs that was in my old sitemap. Am I properly using a 301 redirect here? Is it the weird rewrite rule? What can I do to send both Google and users to the proper page and save my page rank?
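
    Two mod_rewrite details that often matter in this situation (general Apache behaviour, not a confirmed diagnosis of this site): an external redirect keeps the original query string unless the substitution ends in "?", and later rules keep running unless the redirecting rule is marked [L]. A sketch of the redirect rule with both changes:

        RewriteCond %{QUERY_STRING} ^action=view&item=([0-9]+)$
        # the trailing "?" drops the old action/item query string from the target,
        # and L stops the later catch-all rule from touching this request
        RewriteRule ^index\.php$ /item/%1? [R=301,L]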

    Read the article

  • Parallel downloading of JavaScript files on page load

    - by user359650
    Below is a quote from one of the Yahoo performance pages: "While a script is downloading, however, the browser won't start any other downloads, even on different hostnames." When I look at the page load of our website, though, I can see that many scripts are being downloaded at the same time. Am I mistaken, or should the quote instead read like this? "While scripts are downloading (there can be several scripts downloading at the same time), the browser won't start any other downloads, even on different hostnames."

    Read the article

  • schema.org 'reviewRating' tag not recognized by Google snippet testing tool

    - by saravanak
    I'm trying to add more structural information to my webpages by using the microdata format suggested at www.schema.org. The procedure seems straightforward, but I'm having issues validating my results in the Google Rich Snippets Testing Tool. Check out this review page; here I'm using the 'reviewRating' property to specify rating values for that particular review. I followed the same format as defined at schema.org/Rating, but the markup fails validation in Google's rich snippet testing tool with the following error info:

        Item Type: http://schema.org/rating
        reviewrating = 5
        ratingvalue = 5
        Warning: Property "reviewrating" was not found.
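
    For comparison, a minimal sketch of review-rating microdata as schema.org documents it (the vocabulary's property names are camelCase, i.e. 'reviewRating' and 'ratingValue', whereas the tool's output above shows them lowercased); the review title and values are placeholders:

        <div itemscope itemtype="http://schema.org/Review">
          <span itemprop="name">Placeholder review title</span>
          <div itemprop="reviewRating" itemscope itemtype="http://schema.org/Rating">
            <span itemprop="ratingValue">5</span> out of
            <span itemprop="bestRating">5</span>
          </div>
        </div>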

    Read the article

  • You don't have permission to access /index.php on this server

    - by Tran Dinh Thoai
    I made a 'login with OpenID' page, and I get an error when the OpenID provider returns to my page:

        You don't have permission to access /index.php on this server.
        Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument to handle the request.

    If I remove the parameters that the OpenID provider returns, the page loads fine. How can I fix this problem? The login page that causes the error is: http://bryox.com/login

    Read the article

  • Google sitemap HrefLang tag without the main site url

    - by Rashmi Pandit
    We have websites with multilingual content, e.g.:

        http://www.example.com/about-us/
        http://www.example.com/en-HK/about-us/
        http://www.example.com/en-GB/about-us/
        http://www.example.com/zn-CH/about-us/

    We need to configure the hreflang tags in the sitemap so that Google knows there are alternate links for the same pages in different languages. I know that for the above example my sitemap url tag would look like this:

        <url>
          <loc>http://www.example.com/about-us</loc>
          <xhtml:link rel="alternate" hreflang="en-GB" href="http://www.example.com/en-GB/about-us"/>
          <xhtml:link rel="alternate" hreflang="en-HK" href="http://www.example.com/en-HK/about-us"/>
          <xhtml:link rel="alternate" hreflang="zn-CH" href="http://www.example.com/zn-CH/about-us"/>
          <changefreq>daily</changefreq>
          <priority>0.8</priority>
        </url>

    However, if I don't have the main URL but just the last three (en-HK, en-GB and zn-CH), how should my url tag look? Should I just skip the loc tag and keep the three xhtml:link tags? Or can I specify any URL in the loc tag and put the remaining two in xhtml:link tags? I am new to Google sitemaps. Any help is greatly appreciated. Thanks, Rashmi

    Edit: From the answer posted at http://stackoverflow.com/questions/18423624/sitemap-for-domain-with-multilanguage-site/18423803#18423803, for my example with sites in en-HK, en-GB and zn-CH, should there be three url tags, each with its own loc and the other two alternates in xhtml:link?
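
    As a sketch of what the three-URL case could look like -- based on Google's general guidance that each language version gets its own url entry listing the full set of alternates, including itself, which is an assumption here rather than something stated in the question -- one entry would be shaped like this, with the en-HK and zn-CH entries repeating the pattern with their own loc:

        <url>
          <loc>http://www.example.com/en-GB/about-us/</loc>
          <xhtml:link rel="alternate" hreflang="en-GB" href="http://www.example.com/en-GB/about-us/"/>
          <xhtml:link rel="alternate" hreflang="en-HK" href="http://www.example.com/en-HK/about-us/"/>
          <xhtml:link rel="alternate" hreflang="zn-CH" href="http://www.example.com/zn-CH/about-us/"/>
        </url>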

    Read the article

  • Is there a modern (eg NoSQL) web analytics solution based on log files?

    - by Martin
    I have been using AWStats for many years to process my log files, but I am missing many possibilities (like cross-domain reports) and I hate being stuck with extra fields I created years ago. Anyway, I am not going to continue using this script. Is there a modern Apache log analytics solution based on modern storage technologies like NoSQL, or at least one somehow ready to cope with large datasets efficiently? I am primarily looking for something that generates nice sortable and searchable output with a focus on web analytics, before I have to write my own frontends (so graylog2 is not an option). This question is purely about log-file-based solutions.

    Read the article

  • Understanding the maximum hit-rate supported by a web-server

    - by SNag
    I would like to crawl a publicly available site (and one that's legal to crawl) for a personal project. From a brief trial of the crawler, I gathered that my program hits the server with a new HTTPRequest 8 times in a second. At this rate, as per my estimate, to obtain the full set of data I need about 60 full days of crawling. While the site is legal to crawl, I understand it can still be unethical to crawl at a rate that causes inconvenience to the regular traffic on the site. What I'd like to understand here is -- how high is 8 hits per second to the server I'm crawling? Could I possibly do 4 times that (by running 4 instances of my crawler in parallel) to bring the total effort down to just 15 days instead of 60? How do you find the maximum hit-rate a web-server supports? What would be the theoretical (and ethical) upper-limit for the crawl-rate so as to not adversely affect the server's routine traffic?
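
    Whatever rate turns out to be acceptable, the simplest way to stay under it is to issue requests sequentially with a fixed delay rather than as fast as possible. A rough sketch (Node.js-style JavaScript, assuming a global fetch and a placeholder URL list; the delay value is illustrative, not a recommendation for this particular site):

        const urls = ['http://example.com/page1', 'http://example.com/page2']; // placeholder list
        const DELAY_MS = 1000; // ~1 request/second -- tune after checking robots.txt and the site's terms

        const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

        async function crawl() {
          for (const url of urls) {
            const response = await fetch(url);   // one request in flight at a time
            const html = await response.text();
            // ... parse and store `html` here ...
            await sleep(DELAY_MS);               // wait before the next request
          }
        }

        crawl();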

    Read the article

  • Is it good or bad to have dynamic content in page titles and/or description

    - by Gunjan
    On a local listing website, I append the number of search results found to the description meta tag of the page (not the title, currently), as I think this is valuable for users, e.g. "Find address, phone numbers, blah blah blah for 21 outlets in locality. Some more stuff after this..." As more places are added to the database, the description for the same page will change frequently. Is this good or bad for SEO? And how about doing the same for title tags?

    Read the article

  • Photoshop, slice, Dreamweaver, web?

    - by Omega
    So I am playing around with Photoshop and Dreamweaver. I have created a site layout and used the slice function on it, then saved it as HTML & images. In Dreamweaver, I open the HTML file and fill the page with content, links, etc. I have a website already, and I would like to use my newly created HTML page on it. But, obviously, if I copy & paste the HTML to my website it won't work, because it will lack the images. Two problems: I can't find the images, and apparently there are a lot of them. I am sure I am making a basic mistake with the images. Can someone help me?

    Read the article

  • Will many links to the same page without nofollow penalize the host site in the search engine rankings?

    - by Evgeny
    Maybe a silly question, but I'll give it a shot :). In my forum app I would like to allow users with sufficiently high reputation to display links to their home pages under every post, without the nofollow attribute (while lower-rep users' links will keep nofollow). I am happy to help the site's contributors improve their own rankings, but I'm not sure whether this can actually hurt the rank of the host (the site that hosts those links), since potentially the same link to a user's home page may be peppered across the host's pages. What do you think? Thanks.
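
    For concreteness, the two variants being weighed would look something like this (illustrative markup, not taken from the app):

        <!-- high-reputation user: plain link, passes link equity -->
        <a href="http://users-homepage.example/">User's homepage</a>

        <!-- lower-reputation user: link marked nofollow -->
        <a href="http://users-homepage.example/" rel="nofollow">User's homepage</a>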

    Read the article

  • How to open the console in different browsers?

    - by Šime Vidas
    Chrome: Press CTRL + SHIFT + I to open the Developer Tools, then click the "Open console" icon in the bottom left corner.
    IE9: Press F12 to open the developer tools, open the Script tab, then click the "Console" button on the right.
    Firefox 4: Press CTRL + SHIFT + K to open the Web Console.
    What about Opera 11 and Safari 5? Clarification: by console I mean the JavaScript console that lets you input and execute JavaScript code.

    Read the article

  • Tools for managing eCommerce backend

    - by rboarman
    I am working with an eCommerce company that has outgrown their hacked-together backend for managing inventory, pricing and feeds to various shopping engines (Yahoo, 3d cart, Amazon, etc.). They currently manage about 12,000 SKUs and are doing $40M in revenue. Their internal people are working on a new Magento solution, but that is six months away, and they need to replace/improve their current solution in order to hold them over. The current solution was developed by two people who have since left the company. What tools/architecture do other eCommerce sites use to manage their inventory, pricing, product descriptions and feed generation for the shopping engines?

    The current solution looks like this:
    1) Inventory, pricing and product descriptions are maintained in a database and in NetSuite by employees
    2) New products are added to the database via import
    3) Twice a week, data is extracted into a giant Excel spreadsheet
    4) The Excel file adjusts pricing based on some simple algorithms
    5) The Excel file exports about six different CSV feeds which are manually uploaded to Amazon, 3d cart, Yahoo, Google and Merchant Advantage
       a. Each feed is a variant of the product data with different field names and formatting
       b. Pricing levels differ between feeds
       c. Some products are not sent to all feeds
    6) Orders are manually parsed and the inventory is adjusted as needed once product is sold

    The new solution should:
    1) Import data from ODBC, CSV and NetSuite (CSV via FTP)
    2) Apply pricing changes via simple algorithms (< $80 add $10, $200 add $25)
    3) Ensure margins are being met
    4) Format and generate a bunch of CSV and XML feeds
    5) Perhaps upload feeds to shopping engines automatically

    What I need to do is replace the Excel file with something that is maintainable and automated. Something in the .NET stack is preferable but not mandatory. I've been looking at BizTalk, but it may take too long to develop and deploy. Any suggestions?
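
    As a very rough illustration of the kind of logic currently living in the spreadsheet (the two thresholds come from the question; the field names, the behaviour between the thresholds, and the use of JavaScript are all assumptions):

        // Sketch of the pricing step only -- not a replacement for the feed pipeline.
        function adjustPrice(cost) {
          if (cost < 80) return cost + 10;    // "< $80 add $10"
          if (cost >= 200) return cost + 25;  // "$200 add $25" (assumed to mean at or above $200)
          return cost;                        // middle band unspecified in the question; left unchanged
        }

        const products = [{ sku: 'ABC-1', cost: 59.99 }, { sku: 'XYZ-2', cost: 249.0 }]; // placeholder rows
        const priced = products.map((p) => ({ ...p, price: adjustPrice(p.cost) }));
        console.log(priced);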

    Read the article

  • Google Fetch issue

    - by Karen
    When I do a Google Fetch on any of my webpages, the results are all the same (below). I'm not a programmer, but I'm pretty sure this is not correct. Out of all the fetches I have done, only one was different: its content length was about 6x the one below, and it showed meta tags etc. Maybe this explains other issues I've been having with the site: a drop in indexed pages. A meta tag analyzer says I have no title tag, meta tags or description, even though I have them on all pages. I had an SEO team working on the site and they were stumped as to why pages were not getting indexed, so they figure it was some type of code error. Are they right?

        HTTP/1.1 200 OK
        Cache-Control: private
        Content-Type: text/html; charset=utf-8
        Content-Encoding: gzip
        Vary: Accept-Encoding
        Server: Microsoft-IIS/7.5
        X-AspNet-Version: 4.0.30319
        X-Powered-By: ASP.NET
        Date: Thu, 11 Oct 2012 11:45:41 GMT
        Content-Length: 1054

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
          <title></title>
          <script type="text/javascript">
            function getCookie(cookieName) {
              if (document.cookie.length > 0) {
                cookieStart = document.cookie.indexOf(cookieName + "=");
                if (cookieStart != -1) {
                  cookieStart = cookieStart + cookieName.length + 1;
                  cookieEnd = document.cookie.indexOf(";", cookieStart);
                  if (cookieEnd == -1) cookieEnd = document.cookie.length;
                  return unescape(document.cookie.substring(cookieStart, cookieEnd));
                }
              }
              return "";
            }

            function setTimezone() {
              var rightNow = new Date();
              var jan1 = new Date(rightNow.getFullYear(), 0, 1, 0, 0, 0, 0);  // jan 1st
              var june1 = new Date(rightNow.getFullYear(), 6, 1, 0, 0, 0, 0); // june 1st
              var temp = jan1.toGMTString();
              var jan2 = new Date(temp.substring(0, temp.lastIndexOf(" ") - 1));
              temp = june1.toGMTString();
              var june2 = new Date(temp.substring(0, temp.lastIndexOf(" ") - 1));
              var std_time_offset = (jan1 - jan2) / (1000 * 60 * 60);
              var daylight_time_offset = (june1 - june2) / (1000 * 60 * 60);
              var dst;
              if (std_time_offset == daylight_time_offset) {
                dst = "0"; // daylight savings time is NOT observed
              } else {
                // positive is southern, negative is northern hemisphere
                var hemisphere = std_time_offset - daylight_time_offset;
                if (hemisphere >= 0) std_time_offset = daylight_time_offset;
                dst = "1"; // daylight savings time is observed
              }
              var exdate = new Date();
              var expiredays = 1;
              exdate.setDate(exdate.getDate() + expiredays);
              document.cookie = "TimeZoneOffset=" + std_time_offset + ";";
              document.cookie = "Dst=" + dst + ";expires=" + exdate.toUTCString();
            }

            function checkCookie() {
              var timeOffset = getCookie("TimeZoneOffset");
              var dst = getCookie("Dst");
              if (!timeOffset || !dst) {
                setTimezone();
                window.location.reload();
              }
            }
          </script>
        </head>
        <body onload="checkCookie()">
        </body>
        </html>

    Read the article

  • How exactly is Google Webmaster Tools measuring "Site Performance"?

    - by Rémi
    I've been working for two months now on improving our response time (mainly server side) on a new forum (a brand new product from a technical point of view) we launched in Germany a few months ago, and I'm quite surprised by the results I get. I monitor our response time using Apache logs and our own implementation of a Boomerang beacon. Using my stats, I can see that our new product responds in about 680 ms where our old product was responding in about 1050 ms. On the other side, Google Webmaster Tools tells us that our pages have an average response time of about 1500 ms today, where it was 700 three months ago with our old product. I figured that GWT was taking client-side metrics into account, so I added some measures to our Boomerang beacon, and everything looks just fine. I've also run some random pages through YSlow and Google's Page Speed and everything looks better than it was before. We even have an 82% on Google's Page Speed tool, which is quite cool for a site with some ads in it :) Lately, we signed a deal with Akamai to use two of their products: CDN for our static files (we were using another CDN before but it wasn't very effective) and RMA to improve network routes. We have also introduced a new aggressive cache mechanism to ensure that most of the pages served to crawlers are cached by our memcache grid. After checking my metrics, it seems that these changes have improved response time from 650 ms to about 500 ms, which is good (still not great, but it is definitely an improvement). But Webmaster Tools continues to report an increasing average response time where we see it decreasing over the same period. Have you ever had the same kind of weird behavior on your sites while doing performance improvements? Do you have any idea how to monitor the same thing Google does with Site Performance in Google Webmaster Tools, so that we could improve our site and constantly check whether it is what Google wants? Edit 2011/07/26: Thanks for your answers, guys! Nevertheless, I was not precise enough. The main issue we have is not with the Site Performance page but with the Crawl Stats one for now. We probably found an issue on our side with some very slow pages (around 3000 ms!!) and we are trying to fix them. I'll keep you posted as soon as I have some info. Thanks again!

    Read the article
