Search Results

Search found 14053 results on 563 pages for 'upk pro (knowledge pathwa'.

Page 156/563 | < Previous Page | 152 153 154 155 156 157 158 159 160 161 162 163  | Next Page >

  • Creating advanced website by redirecting and replacing content from Google Sites

    - by David
    I would like to create a corporate website with members area. Importantly, I want many novice webadmins to be able to modify static content themselves. Therefore, I got the idea to create the site using Google Sites and insert elements with width and height in places where I want dynamic content. The website would be read using PHP on a different server and the marker elements would be replaced with dynamic content created by PHP. What would be the drawbacks of this approach?
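    As a sketch of the mechanism being described (hedged: the Sites URL, marker element and helper below are hypothetical), the PHP side would fetch the published Google Sites page and swap the marker element for generated markup:

      <?php
      // Hypothetical proxy: pull the static page from Google Sites and swap a
      // fixed-size marker element for dynamically generated content.
      function renderMemberArea(): string {
          return '<p>Hello, member!</p>'; // stand-in for the real dynamic block
      }

      $staticHtml  = file_get_contents('https://sites.google.com/site/examplecorp/members');
      $dynamicHtml = '<div id="member-area">' . renderMemberArea() . '</div>';

      // Editors are told never to touch this marker element inside Google Sites.
      $marker = '<div id="dynamic-placeholder" style="width:600px;height:400px;"></div>';

      echo str_replace($marker, $dynamicHtml, $staticHtml);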

    Read the article

  • Republishing blog posts on a popular website

    - by Giorgi
    I started my blog about programming yesterday, and to promote it and increase traffic I submitted my RSS feed to CodeProject, which pulls my posts and republishes them there. While this increases the number of people reading my posts (though they are reading them at CodeProject), I am worried that Google will penalize my site for duplicate content, especially since CodeProject has much more reputation than my new website. The post at CodeProject has a link back to my blog post, but it does not have rel="canonical". So my question is: which is better, a link from a high-reputation website and some traffic, or should I remove my feed from CodeProject so that my blog is not penalized? And what if CodeProject adds rel="canonical" to the link?
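    For reference, rel="canonical" on a republished copy is just a link element in that page's head pointing back at the original post; a minimal sketch (the blog URL is a placeholder):

      <!-- in the <head> of the republished copy, pointing back at the original -->
      <link rel="canonical" href="http://myblog.example.com/2010/05/my-post.html" />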

    Read the article

  • How do I allow e-mail to be relayed through this MTA?

    - by BlueToast
    When I try to send an e-mail using authenticationless relay via telnet, I receive the error message "553 sorry, that domain isn't allowed to be relayed thru this MTA (#5.7.1)". How can I allow a specific domain to be whitelisted and allowed through the MTA? There is only one domain I am trying to relay e-mails to (and that domain uses a totally different, independent and standalone mail server running IceWarp). Here is the telnet session:

      220 mail4.myhsphere.cc ESMTP
      ehlo sisterwebsite.com
      250-mail4.myhsphere.cc
      250-PIPELINING
      250-8BITMIME
      250-SIZE 41943040
      250-AUTH LOGIN PLAIN CRAM-MD5
      250 STARTTLS
      mail from:[email protected]
      250 ok
      rcpt to:[email protected]
      553 sorry, that domain isn't allowed to be relayed thru this MTA (#5.7.1)
      rcpt to:[email protected]
      553 sorry, that domain isn't allowed to be relayed thru this MTA (#5.7.1)
      rcpt to:[email protected]
      553 sorry, that domain isn't allowed to be relayed thru this MTA (#5.7.1)
      rcpt to:[email protected]
      250 ok
      data
      354 go ahead
      To: [email protected]
      From: [email protected]
      Subject: Test mail -- please ignore
      Test, please ignore this
      Jane
      Sincerely,
      BlueToast
      .
      250 ok 1350407684 qp 22451
      quit
      221 mail4.myhsphere.cc
      Connection to host lost.

      C:\Users\genericaccount

    Not sure what to do. I did some Googling, but I'm having a hard time finding relevant results: most of what I find is about receiving mail, whereas I am trying to send mail. mail.sisterwebsite.com = mail4.myhsphere.com. We use FluidHosting for the e-mail on sisterwebsite.com. (Repeating the question just in case:) How can I allow a specific domain to be whitelisted and allowed through the MTA?
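    Since the EHLO banner above advertises AUTH LOGIN PLAIN CRAM-MD5, one possibility worth illustrating (a hedged sketch; whether this host accepts your mailbox credentials for SMTP AUTH over a plain-text session is an assumption) is to authenticate before issuing RCPT TO, since most MTAs only relay to foreign domains for authenticated senders:

      ehlo sisterwebsite.com
      auth login
      334 VXNlcm5hbWU6                     <- base64 for "Username:"
      dXNlckBzaXN0ZXJ3ZWJzaXRlLmNvbQ==     <- base64 of your full mailbox name (placeholder)
      334 UGFzc3dvcmQ6                     <- base64 for "Password:"
      c2VjcmV0cGFzcw==                     <- base64 of your password (placeholder)
      235 ok, go ahead                     <- success code; exact wording varies by server
      mail from:[email protected]
      250 ok
      rcpt to:[email protected]
      250 ok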

    Read the article

  • How to properly remove URLs from Google's index?

    - by ElHaix
    On some of our sites, we now have several thousand pages that dilute our website's keyword density. The website is an MVC site with SEO routing. If I submit a new sitemap with, say, only the 2000 or so pages that we want indexed, will Google re-index the site with only those 2000 pages and drop the superfluous ones, even though navigating to the diluting pages still works? For example, I want to keep roughly 2000 URLs like these:

      www.mysite.com/some-search-term-1/some-good-keywords
      www.mysite.com/some-search-term-2/some-more-good-keywords

    and remove several thousand like the following that have already been indexed:

      www.mysite.com/some-search-term-xx/some-poor-keywords
      www.mysite.com/some-search-term-xx/some-poor-more-keywords

    These pages are not actually "removed", since navigating to their URLs still renders a page. Even though there are potentially hundreds of thousands of pages, I only want about 2000 to be re-indexed and retained, and the rest removed without having to do it manually. Thanks.
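    A new sitemap by itself is generally treated as a hint rather than a removal request; the usual explicit signal (a minimal sketch, assuming you can vary the rendered head per route in your MVC views) is a robots noindex on the pages you want dropped:

      <!-- emitted only on the poor-keyword pages that should fall out of the index -->
      <meta name="robots" content="noindex, follow">

    The same directive can also be sent as an X-Robots-Tag: noindex response header if editing the markup per route is awkward.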

    Read the article

  • You don't have permission to access /index.php on this server

    - by Tran Dinh Thoai
    I made a 'login with OpenID' page, and I get an error when the OpenID provider returns to my page: "You don't have permission to access /index.php on this server. Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument to handle the request." If I remove the parameters that the OpenID provider returns, the page loads fine. How can I fix this problem? The login page that causes the error is: http://bryox.com/login

    Read the article

  • schema.org 'reviewRating' tag not recognized by Google snippet testing tool

    - by saravanak
    I'm trying to add more structural information to my web pages by using the microdata format suggested at www.schema.org. The procedure seems straightforward, but I'm having issues validating my results in the Google Rich Snippets Testing Tool. Check out this review page: here I'm using the 'reviewRating' property to specify rating values for that particular review. I followed the same format as defined at schema.org/Rating, but this markup fails validation in Google's rich snippet testing tool with the following error info:

      Item Type: http://schema.org/rating
      reviewrating = 5
      ratingvalue = 5
      Warning: Property "reviewrating" was not found.
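    For comparison, a minimal sketch of the nesting the tool expects, assuming a Review that wraps a Rating (note that microdata property names are case-sensitive, so reviewRating/ratingValue rather than the lower-cased forms in the error output):

      <div itemscope itemtype="http://schema.org/Review">
        <span itemprop="name">Great product</span>
        <div itemprop="reviewRating" itemscope itemtype="http://schema.org/Rating">
          <span itemprop="ratingValue">5</span> out of
          <span itemprop="bestRating">5</span>
        </div>
      </div>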

    Read the article

  • Understanding the maximum hit-rate supported by a web-server

    - by SNag
    I would like to crawl a publicly available site (and one that's legal to crawl) for a personal project. From a brief trial of the crawler, I gathered that my program hits the server with a new HTTPRequest 8 times in a second. At this rate, as per my estimate, to obtain the full set of data I need about 60 full days of crawling. While the site is legal to crawl, I understand it can still be unethical to crawl at a rate that causes inconvenience to the regular traffic on the site. What I'd like to understand here is -- how high is 8 hits per second to the server I'm crawling? Could I possibly do 4 times that (by running 4 instances of my crawler in parallel) to bring the total effort down to just 15 days instead of 60? How do you find the maximum hit-rate a web-server supports? What would be the theoretical (and ethical) upper-limit for the crawl-rate so as to not adversely affect the server's routine traffic?
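    As an illustration of the throttling side only (a sketch; the URL list, storage and 500 ms delay are arbitrary placeholders, and nothing here says what this particular server can actually absorb), the crawl rate can be limited explicitly instead of multiplied by running parallel instances:

      <?php
      // Hypothetical throttled fetch loop: at most one request every 500 ms,
      // i.e. roughly 2 hits per second instead of 8.
      $urls  = file('urls.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
      $delay = 500000; // microseconds between requests; tune to an agreed rate

      foreach ($urls as $url) {
          $html = file_get_contents($url);                   // fetch one page
          if ($html !== false) {
              file_put_contents(md5($url) . '.html', $html); // keep a raw copy
          }
          usleep($delay);                                    // wait before the next hit
      }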

    Read the article

  • Cheaper alternatives to 99Designs.com (outsource CSS design)

    - by Chris Smith
    I'm designing my own website as a side project and I want the site to look professional. (Read: not designed by a programmer.) I don't mind spending a little money to have a professional do it, but design sites like 99designs.com cost way too much (~$500+). Is there a cheaper (~$100 - $200) alternative for getting a designer to improve an existing site? (Things like updating the CSS or suggesting better ways of laying out the navigation.) Or is my best bet trying to pick up a freelancer on Craigslist?

    Read the article

  • Google Goals process not working through similarly named pages

    - by David
    Well, I'm at a loss. I've ensured that my tracking script is in place etc., and I've set up my goal and funnel path, but only the first step is ever shown on the funnel.

      Goal URL: /checkout/checkoutComplete/ (Type: Head Match)
      Step 1: /checkout/ (required)
      Step 2: /checkout/confirm/

    Should the goal instead be /checkout/checkoutComplete/(.*) and set to Regular Expression match, because there are parameters after the main part of the URL? (I thought that's what Head Match was for.) Both of the step URLs above are valid and correct for my domain. But for some reason, the funnel visualization shows entries into the first step, then an exit count that matches the entry count (including exits to /checkout/confirm), yet it doesn't go on to the next step! Perhaps I'm doing something obviously wrong, but I can't quite see it. Also, two semi-related questions: does making a change to the funnel only affect new incoming data, and how often does it update? Thanks in advance for your help.

    Read the article

  • Why am I seeing unexpected requests for "crossdomain.xml" in my logs?

    - by Bogdacutu
    I've been getting lots of 404 errors for crossdomain.xml. Here are the request details, as provided by Google App Engine:

      404 22ms 19cpu_ms 0kb Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.122 Safari/534.30
      69.130.*.* - - [24/Jul/2011:07:43:42 -0700] "GET /crossdomain.xml HTTP/1.1" 404 124 "http://s.nsdsvc.com/App/DddWrapper.swf?c=3" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.122 Safari/534.30" "app.*.*.*" ms=22 cpu_ms=19 api_cpu_ms=0 cpm_usd=0.000633 instance=00c61b117c557326bef77d341a345431e66b

    I'm not sure what is going on. Can anyone help me solve this issue?
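    For context, crossdomain.xml is the Flash cross-domain policy file that the referring .swf (the DddWrapper.swf referrer in the log above) is probing for. A minimal sketch of one is below; serving some explicit policy stops the 404s, though whether you actually want to grant access is a separate decision:

      <?xml version="1.0"?>
      <!DOCTYPE cross-domain-policy SYSTEM "http://www.adobe.com/xml/dtds/cross-domain-policy.dtd">
      <cross-domain-policy>
        <!-- Example only: this grants every domain Flash access to your resources. -->
        <allow-access-from domain="*"/>
      </cross-domain-policy>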

    Read the article

  • How to 301 redirect from old query-string URLs to CakePHP canonical URLs?

    - by Daniel Bingham
    I currently have a .htaccess file that looks like this:

      RewriteCond %{QUERY_STRING} ^action=view&item=([0-9]+)$
      RewriteRule ^index\.php$ /index.php?url=item/%1 [R=301]

      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteRule ^(.*)$ index.php?url=$1 [QSA,L]

    It is meant to 301 redirect my old query-string-based URLs to the new CakePHP URLs, and it successfully sends users to the correct page. However, Google doesn't seem to like it (see below). I previously tried this instead:

      RewriteCond %{QUERY_STRING} ^action=view&item=([0-9]+)$
      RewriteRule ^index\.php$ /item/%1 [R=301]

      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteRule ^(.*)$ index.php?url=$1 [QSA,L]

    But that fails: the second rewrite rule doesn't seem to catch the rewritten URL, and it goes straight through. Using the first version wouldn't be a problem, except that I suspect it is what is choking up Google. It hasn't indexed my sitemap full of the new URLs. My old sitemap had been fully indexed and all of its URLs are in Google's index, but Google isn't following the redirects from the old URLs to the new ones, and I have a 'not followed' error for every one of the query-string URLs that was in my old sitemap. Am I properly using a 301 redirect here? Is it the weird rewrite rule? What can I do to send both Google and users to the proper page and save my page rank?
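    One detail worth checking (a sketch, not a verified fix for this site): with the second version the original query string is carried over onto the redirect target, so the client is sent to /item/123?action=view&item=123, and without an L flag processing also continues into the catch-all rule. Ending the substitution with an empty query string (or using the QSD flag on Apache 2.4+) discards it, and [L] stops the pass:

      RewriteCond %{QUERY_STRING} ^action=view&item=([0-9]+)$
      RewriteRule ^index\.php$ /item/%1? [R=301,L]

      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteRule ^(.*)$ index.php?url=$1 [QSA,L]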

    Read the article

  • Can I import existing member data used in old ASP into a new ASP.NET membership database? [closed]

    - by Rick Brown
    I have an old website that I designed and still maintain using old ASP that has a membership database (MS-SQL) that I built from scratch. It is a very simple database that has all the user information in one table (including login info and personal info) and then details and other odds and ends in other tables. It is WAY past time to upgrade this to .NET, especially since I need to add a Paypal payment system into it as soon as I can. I've designed several other sites with membership in .NET, but they have all been from scratch. Is there an easy way to transition from the old ASP site to a new .NET membership database without losing the data? There are hundreds of users with thousands of records relating to those users that I'd rather not lose, if possible. Any ideas on a relatively painless way to do this?

    Read the article

  • Google displaying swf menu in SiteLinks

    - by m90
    I have a website that uses a Flash-based menu as its main navigation. A plain HTML fallback version is "lying underneath", and the swf is embedded using swfobject:

      swfobject.embedSWF('MENU.swf', 'menu', '1000', '600', '8.0.0', 'ext/expressInstall.swf', {}, {wmode:'transparent', bgcolor:'#666666'}, {});

    Somehow Google has now started displaying a link to the swf file in the SiteLinks (noting [SWF] beforehand), which is pretty ugly, as the Flash content gets all scrambled and all you see is a random string of characters and numbers (it looks "hacked" to me, although I do know it is not). Also, the link to the swf is plain useless, as it relies on JavaScript functions in the HTML document. I already demoted the swf in Webmaster Tools, yet in some situations the link will still show up. Is anyone aware of this problem (I haven't found much about it on the Internet), and does anyone know how I can keep the search results from linking to the swf?
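    One way to keep the file itself out of the index (a sketch assuming the site runs on Apache with mod_headers available; MENU.swf is the file name from the embed call above) is to serve a noindex header for it from .htaccess:

      <Files "MENU.swf">
        Header set X-Robots-Tag "noindex"
      </Files>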

    Read the article

  • Adding a regular expression in PHP does not work

    - by John Smiith
    The code I added is ([a-zA-Z0-9\_\-]+), but it does not work. I want to include all CSS files; is there any other way to do this? My code, in css.php:

      header("Content-type: text/css");
      $css = array(
          '([a-zA-Z0-9\\_\\-]+).css',
      );
      foreach ($css as $css_file) {
          $css_get = file_get_contents($css_file);
          echo $css_get;
      }

    And in call.php:

      <link href="css.php" rel="stylesheet" type="text/css" />

    I also want to rewrite css.php to css.css, so the public sees css.css instead of css.php. How can I do that using a PHP script?
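    file_get_contents() expects a literal filename, not a pattern, so the regex entry above is treated as the name of a file that doesn't exist. A minimal sketch of the usual approach is to expand the pattern with glob() instead (this assumes the CSS files sit in the same directory as css.php):

      <?php
      // css.php: concatenate every .css file in this directory and serve it as CSS.
      header("Content-type: text/css");

      foreach (glob(__DIR__ . '/*.css') as $css_file) {
          echo file_get_contents($css_file), "\n";
      }

    Serving it under the name css.css rather than css.php is normally done with a server rewrite (for example RewriteRule ^css\.css$ css.php [L] in .htaccess, if that is available) rather than in the PHP script itself.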

    Read the article

  • What is the best way to promote a programming blog?

    - by paul
    (The guys from 'Programmers' referred me here...) How do you promote your programming blog? I recently started http://blackforestcoder.blogspot.com/ to record my progress working with new technologies and ideas. The main aim being to provide a list of pitfalls and solutions and also to get feedback from readers. Since I set it up 10 days ago I have only had about 2-3 hits even though Google is supposed to be indexing it. How might I boost the hit rate?

    Read the article

  • Google sitemap hreflang tag without the main site URL

    - by Rashmi Pandit
    We have websites with multilingual content, e.g.:

      http://www.example.com/about-us/
      http://www.example.com/en-HK/about-us/
      http://www.example.com/en-GB/about-us/
      http://www.example.com/zn-CH/about-us/

    We need to configure the hreflang tags in the sitemap for Google to know that there are alternate links for the same pages in different languages. I know for the above example that my sitemap url tag would look like this:

      <url>
        <loc>http://www.example.com/about-us</loc>
        <xhtml:link rel="alternate" hreflang="en-GB" href="http://www.example.com/en-GB/about-us"/>
        <xhtml:link rel="alternate" hreflang="en-HK" href="http://www.example.com/en-HK/about-us"/>
        <xhtml:link rel="alternate" hreflang="zn-CH" href="http://www.example.com/zn-CH/about-us"/>
        <changefreq>daily</changefreq>
        <priority>0.8</priority>
      </url>

    However, if I don't have the main URL but just the last three ones with en-HK, en-GB and zn-CH, how should my url tag look? Should I just skip the loc tag and keep the three xhtml:link tags? Or can I specify any URL in the loc tag and put the remaining two in xhtml:link tags? I am new to Google sitemaps. Any help is greatly appreciated. Thanks, Rashmi
    Edit: From the answer posted at http://stackoverflow.com/questions/18423624/sitemap-for-domain-with-multilanguage-site/18423803#18423803, for my example with sites in en-HK, en-GB and zn-CH, should there be three url tags, with each of them assigned to loc and with the other two in xhtml:link?
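    As a sketch of the pattern the linked answer describes (hedged: this follows the general guidance that each language version gets its own url entry listing itself and all of its alternates; substitute your real URLs):

      <url>
        <loc>http://www.example.com/en-GB/about-us/</loc>
        <xhtml:link rel="alternate" hreflang="en-GB" href="http://www.example.com/en-GB/about-us/"/>
        <xhtml:link rel="alternate" hreflang="en-HK" href="http://www.example.com/en-HK/about-us/"/>
        <xhtml:link rel="alternate" hreflang="zn-CH" href="http://www.example.com/zn-CH/about-us/"/>
      </url>
      <url>
        <loc>http://www.example.com/en-HK/about-us/</loc>
        <xhtml:link rel="alternate" hreflang="en-GB" href="http://www.example.com/en-GB/about-us/"/>
        <xhtml:link rel="alternate" hreflang="en-HK" href="http://www.example.com/en-HK/about-us/"/>
        <xhtml:link rel="alternate" hreflang="zn-CH" href="http://www.example.com/zn-CH/about-us/"/>
      </url>
      <url>
        <loc>http://www.example.com/zn-CH/about-us/</loc>
        <xhtml:link rel="alternate" hreflang="en-GB" href="http://www.example.com/en-GB/about-us/"/>
        <xhtml:link rel="alternate" hreflang="en-HK" href="http://www.example.com/en-HK/about-us/"/>
        <xhtml:link rel="alternate" hreflang="zn-CH" href="http://www.example.com/zn-CH/about-us/"/>
      </url>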

    Read the article

  • I want to host clients' websites, but not their email. What's the easiest way to handle this?

    - by Phil
    My company lets non-technical users build their own niche industry websites on our server, which we host. They can currently point their nameservers (at their registrar) to us, which leaves them with no access to their e-mail if they had already set it up through that registrar. We don't want to interfere with their existing e-mail, nor do we want to get into the business of setting up e-mail for them through our service. Having them point A records/CNAMEs to us would work instead, but is this too complex for a non-techie user? We thought of having them point their nameservers to us and then pointing the MX records back to them, but this is also beyond their scope. Is there an easy way to 'point records' that leaves everything else in its initial state? Any other ideas/feedback?
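    For illustration, a hedged sketch of what the registrar-side zone could look like if a client keeps DNS and e-mail where they are and only sends web traffic to you (the names and IP below are hypothetical):

      ; DNS stays at the registrar; only the web records change
      clientsite.com.        IN  A      203.0.113.10              ; your web server
      www.clientsite.com.    IN  CNAME  hosting.yourcompany.com.  ; or an A record to the same IP
      clientsite.com.        IN  MX 10  mail.registrar-host.com.  ; e-mail left untouched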

    Read the article

  • Wordpress Multisite and Google Analytics in subfolders with mapped domains

    - by David
    I have a WordPress multisite using subfolders. The site's subfolders are mapped to domains, which are set to primary, and I'm using the 'Google Analytics Multisite Async' code to track things. From what I can see it's tracking the sites fine (getting page hits for each site in Google Analytics), except for the original site in the multisite: its Content Overview lists the other domains as content, together with the amount of traffic they're getting, alongside the original domain's own traffic. I don't want to track any traffic for my original site other than what actually goes to it, i.e. I don't want it tracking the other sites in the multisite. E.g. domain1.com is my original site and I have lots of other sites in the multisite, say domain2.com and domain3.com; in Content Overview in Analytics it lists, say, domain2.com as content. Can I tell it to filter these out somehow, either in Analytics or within WordPress? Hopefully I've explained that clearly!

    Read the article

  • Best way to provide folder level 301 redirect

    - by Vinay
    I have a website hosted on Yahoo Small Business, so I don't have access to the .htaccess file. I have around 220 pages in a folder 'mysubfolder' (http://mysite.com/myfolder/mysubfolder), and the website is around 3 years old. I am planning to move all 220 pages in 'mysubfolder' to 'myfolder' (one level up). All the pages under 'mysubfolder' are indexed. What is the best way to do this so that it does not affect SEO?
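    Without .htaccess, one hedged option (a sketch that assumes the old locations are, or can be replaced by, small PHP stubs on this host) is to send the 301 from the page itself; the file and target names below are placeholders:

      <?php
      // Stub left at /myfolder/mysubfolder/some-page.php after the real page
      // has moved one level up; replies with a permanent redirect.
      header('Location: http://mysite.com/myfolder/some-page.php', true, 301);
      exit;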

    Read the article

  • Does anyone know any good resources for learning how to market a web app?

    - by Jack Kinsella
    I'm a developer first and foremost. I write web apps but have a hard time generating traffic and converting potential users once I've released my product into the wild. I know I need to learn more about marketing but I don't know where to start as I've no baseline to judge the quality of the materials I stumble across. Does anyone know any websites, blogs, e-books or other resources for learning how to market effectively?

    Read the article

  • Does having a website inside a frame (<frameset>) help or affect search engine rankings?

    - by rajesh.magar
    I have been working to promote my website for a long time, but I am not getting traffic in proportion to the work I have put into it. My website runs online under another domain using a frameset, so is that affecting search indexing and ranking? My parent website is http://www.battle-cancer.com, and it is framed like this:

      <frameset frameborder="0" framespacing="0" border="0" rows="100%,*" noresize>
        <frame name="frame" src="http://www.battle-cancer.com" noresize>
      </frameset>

    It runs online at http://www.elimaysupplements.com/.

    Read the article

  • Best practice for bulk eCommerce product upload?

    - by Or W
    I'm thinking about opening a large online store for jewelry. The one thing that really bothers me is managing the actual operation of taking pictures, uploading them and describing all the products. I'm trying to figure out the best way to do it, in terms of performance and the least time-consuming workflow. Just a few things to keep in mind: I'll have over 1,000 items in the online store; I'll have 3-4 pictures for each item (I'm using a DSLR camera, if it makes any difference); and I'm probably going to use Magento, unless you have better experience with another eCommerce platform that will help me get this done quickly. I'll also need to (randomly?) create a product code for each item.

    Read the article

  • How to secure robots.txt file?

    - by CompilingCyborg
    I would like user agents to index only my relative pages, without accessing any directory on my server. As an initial thought, I had this version in mind:

      User-agent: *
      Disallow: */*
      Sitemap: http://www.mydomain.com/sitemap.xml

    My questions: Is it correct to block all directories like that, with Disallow: */*? Would search engines still be able to see and index my sitemap if I disallowed all directories? What are the best practices for securing the robots.txt file? For reference, here is a good tutorial example for robots.txt:

      # Add this if you want to stop Alexa from indexing your site.
      User-agent: ia_archiver
      Disallow: /

      # Add this to stop duggmirror
      User-agent: duggmirror
      Disallow: /

      # Add this to allow specific agents
      User-agent: Googlebot
      Disallow:

      # Add this to allow all agents while blocking specific directories
      User-agent: *
      Disallow: /cgi-bin/
      Disallow: /*?*

    Read the article

  • Why is my domain redirect on Google Apps returning 404?

    - by Tom Brito
    I have a configuration in the Google Apps control panel (dcc.securepaynet.net) to redirect tombrito.com to http://buscatextual.cnpq.br/buscatextual/visualizacv.do?id=K4499244H9. It worked fine until a few days ago, but now it's returning 404. If you access tombrito.com you can see the favicon in the browser tab title, but the page shows a 404 error. The target page http://buscatextual.cnpq.br/buscatextual/visualizacv.do?id=K4499244H9 is fine; it's only some problem with my redirect. Any idea what's wrong here?

    Read the article

  • Redirecting from a 1und1 hosting solution, with URLs intact

    - by Jelmar
    I have done this before on GoDaddy without a hitch, but I cannot seem to figure out this particular case. I have a domain space with the temporary URL http://yogainun.mysubname.com/ and am hosting the domain name that is to be applied to it at 1und1.de. Right now I have set it up so that, from the 1und1 domain name hosting, the address http://www.yoga-in-unternehmen.de/ is frame-redirected to the subdomain I just referred to. But this is not what I want: http://www.yoga-in-unternehmen.de/ is to be the domain. With the frame redirect, URLs like http://www.yoga-in-unternehmen.de/example-article do not show up, but that is what I want. With GoDaddy in a similar case, I just turned on DNS and changed the nameservers. That worked without problem, but not with 1und1. Is there something I am missing?

    Read the article
