Search Results

Search found 9717 results on 389 pages for 'gkt pro'.

Page 192/389

  • PayPal - Account management

    - by Tom
    I'm running an app that gets small donations (micropayments up to ~11 USD), and I also do some freelancing for which I receive higher payments over PayPal (~900 USD a month). Is it possible to have 2 accounts on PayPal? (I'm asking because when someone sends me money for my freelancing, they see the contact information from the app - like [email protected] instead of [email protected].) Thanks.

    Read the article

  • What preparation is needed before applying for ads?

    - by cj333
    I have built a new site (finished last week, but with no SEO and no clients yet). I know applying for ads can take a long time, so I sent my request to Google AdSense and then started on SEO and submitting the site to search engines. But yesterday I received an e-mail along these lines: "Thank you for your interest in Google AdSense. Unfortunately, after reviewing your application, we are unable to accept you into AdSense at this time." What preparation is needed before applying for ads? Did I overlook something that must be done before applying? And how do I apply for Google DoubleClick, or for other ad services? I am new to applying for ads. Can anyone recommend some good ad services? Thanks.

    Read the article

  • 80,000 hits on irrelevant topic in 1-2 days [closed]

    - by John
    On 3rd of Nov 2012 somebody started this thread: http://forums.hostgator.com/advice-needed-new-account-t214566.html?t=214566 with the subject "Advice needed on new account". I visited it on the 5th, and when I saw the total views I was stunned to see a figure of 80,000 (while other threads had views from 50-300). I even made a post using the name rag_gupta. Then I simply typed into Google in IE (I normally work in Firefox): Advice needed on new account - and yes, this same page was in the no. 1 position. Subsequently I tried to create a similarly worded article on HubPages to see how it would fare. Unfortunately it was not allowed to be published, so I published it on blogger.com instead, but it got hardly any hits. Considering only the first two posts in the thread, what could have driven Google to rank it so highly?

    Read the article

  • What's the canonical process for backing up a website?

    - by Walkerneo
    This is going to sound terrible, but bear with me. I currently have a cron job that does a mysql dump, a git add all and commit, and a git push to Bitbucket. I set this up almost a year ago, when I didn't know much about git, backups, and general web development and administration. I haven't had the time to fix this and do it properly, but the repo has now grown quite big from accumulating large temporary files from my forum, so now I have to do something, and I want to do it properly this time around. What processes do semi-large websites and personal site admins use for backing up server content? Based on what I've learned since I set this up, what I'm currently thinking of doing is:
    - Making changes on a development domain and committing the code frequently
    - Archiving the entire site after a successful deployment from the development domain
    - Having automatic daily database and user-content backups
    I still like the idea of backing up sqldumps with git, though. I know git isn't a backup tool and that this is beyond its purpose, but the textual queries that are exported would be easily managed by git and would save a lot of space in archives.
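
    For illustration, a minimal nightly backup script along the lines of the last point might look like the bash sketch below. The paths, database name and retention period are placeholders, not details from the question:

        #!/bin/bash
        # Nightly backup sketch: dump the database, archive user content, keep 14 days.
        set -euo pipefail

        STAMP=$(date +%F)                      # e.g. 2013-11-03
        BACKUP_DIR=/var/backups/site           # placeholder destination
        SITE_DIR=/var/www/example.com          # placeholder web root

        mkdir -p "$BACKUP_DIR"

        # Database dump (credentials can live in ~/.my.cnf so they stay out of the script).
        mysqldump --single-transaction example_db | gzip > "$BACKUP_DIR/db-$STAMP.sql.gz"

        # Archive user-generated content (uploads, attachments), not the code itself.
        tar -czf "$BACKUP_DIR/content-$STAMP.tar.gz" "$SITE_DIR/uploads"

        # Remove archives older than 14 days.
        find "$BACKUP_DIR" -name '*.gz' -mtime +14 -delete

    Run from cron (for example 0 3 * * * /usr/local/bin/backup.sh, a placeholder path) and synced off-site, this keeps code history in git while the database and user content go into dated archives.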

    Read the article

  • Embedding a generic Google search with autocomplete - not a custom site search

    - by picxelplay
    Most people's home page is google.com. My homepage is just a custom HTML page hosted on my computer. I do this because I am a web developer and I have several projects that I work on at a time, so I like to have quick links to all of them. On that page I usually just have a link to google.com for when I want to search. But below all of my quick links, I want to add a Google search box (with autocompletions). I first used a simple iframe to embed google.com into the page, but then my search results were confined to that iframe; I want to search for something and have the results open in a new tab. I then came across this code snippet, but it doesn't have autocompletions: http://www.refactory.org/s/google_search/view/2 How can I add autocompletions to this? Or is there a better way of doing it? Thanks in advance for any advice.
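
    For reference, the simplest non-iframe approach is a plain form that submits to Google and opens the results in a new tab; this minimal sketch has no autocompletions, which is exactly the gap the question is about (adding suggestions generally means using Google's Custom Search element instead):

        <!-- Minimal sketch: a plain Google search box; results open in a new tab. -->
        <form action="https://www.google.com/search" method="get" target="_blank">
          <input type="text" name="q" size="40" autofocus>
          <input type="submit" value="Search">
        </form>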

    Read the article

  • How can I track hits to areas of my web application?

    - by Tyson
    We have a growing web application, and we currently use Google Analytics and Chartbeat to track usage and engagement (although we're open to alternatives). Unfortunately, both are geared towards content-based sites where everything is about the URL. Our URLs contain object IDs, making them less useful independently, and causing us to grow beyond Google Analytics' 50,000 unique URLs per day. How can we track hits to areas of our web application, essentially ignoring parts of the URLs?
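
    One common workaround, sketched here with the classic ga.js async syntax and not taken from the question itself, is to report a virtual page path with the object IDs stripped, so every hit to one area of the application collapses onto a single tracked URL (this assumes the standard ga.js snippet is already on the page):

        // Sketch: collapse /projects/12345/tasks/678 into /projects/:id/tasks/:id
        // before sending the pageview to Google Analytics (classic ga.js syntax).
        var virtualPath = location.pathname.replace(/\/\d+/g, '/:id');
        _gaq.push(['_trackPageview', virtualPath]);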

    Read the article

  • Using MLP, how do I link to the corresponding page in the other languages?

    - by lyle
    Hi all, the question says it all, but here's a bit more detail: I'm helping to build a bilingual website using MLP on Textpattern. It's trivial to put a link to the top-level page of another language, but how do I link to the current page in another language? E.g. /en/contact should link to /de/kontakt (the same article in the other language). I'm sure there are some variables somewhere that I could put into the template that would be filled with the correct links. Thanks in advance. :)

    Read the article

  • Tracking user behaviour - with or without Google Analytics

    - by Ilian Iliev
    If I understand correctly the following point from the GA TOS: "PRIVACY. You will not (and will not allow any third party to) use the Service to track or collect personally identifiable information of Internet users, nor will You (or will You allow any third party to) associate any data gathered from Your website(s) (or such third parties' website(s)) with any personally identifying information from any source as part of Your use (or such third parties' use) of the Service. You will have and abide by an appropriate privacy policy and will comply with all applicable laws relating to the collection of information from visitors to Your websites. You must post a privacy policy and that policy must provide notice of your use of a cookie that collects anonymous traffic data." In other words, you are not allowed to use custom variables that identify the visitor (for example website username, e-mail, id, etc.). So the question is: how can I track specific user behaviour (for example, the actions that every single logged-in user performs)?
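
    Within those terms, the usual pattern is to record actions rather than identities, for example classic GA event tracking as in the sketch below (an illustration only, not legal advice; whether anonymous action tracking satisfies the goal of following a single logged-in user is exactly the open question here):

        // Sketch: record what logged-in users do without sending who they are
        // (classic ga.js event tracking: category, action).
        _gaq.push(['_trackEvent', 'forum', 'post-created']);
        _gaq.push(['_trackEvent', 'profile', 'avatar-uploaded']);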

    Read the article

  • Redesigning an old site, structure change, etc.

    - by RhymeGuy
    I have an old site built in 2006; it has around 200 pages and 500 pictures. Every single page is of course indexed, as are the images. It is very well ranked for its targeted keywords and I receive a good amount of SEO traffic (I guess that's due to the various campaigns, branding, PPC, etc.). Problem: the site has an outdated design, pages and images have less-than-ideal names, there are no heading or alt tags, it was built with tables, inline CSS, etc. Goal: completely redesign the site - use divs, change file names, add proper meta data, alt tags, etc. Question: how can this affect current SEO positions? I will redirect (301) every single page to the new one and build a site map, but what do I do with the images? Do I need to redirect them also? Any other suggestions?
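
    For the redirects themselves, including the renamed images, per-URL 301 rules in .htaccess are the usual tool. A hedged sketch with made-up file names (mod_alias syntax):

        # Old HTML pages to their new, properly named URLs.
        Redirect 301 /products.html     /products/
        Redirect 301 /contact_us.html   /contact/

        # Renamed images: redirect the old paths too, so image-search traffic
        # and existing hotlinks follow the files to their new names.
        Redirect 301 /img/pic001.jpg    /images/blue-widget-front.jpg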

    Read the article

  • How do I remove these errors from my blog so as to get AdSense approved?

    - by Serenity
    This is the question I asked on the SO site earlier, but I didn't get satisfactory replies; hoping to find a solution here: http://stackoverflow.com/questions/12136796/how-can-i-detect-and-correct-these-errors-on-my-blog/12136829#comment16235061_12136829 In Webmaster Tools, apart from the errors in the question linked above, it is showing a sitemap error too, as in the screenshot below. Need guidance please... thanks :) EDIT 2: I had 2 SEO plugins on my blog, and I would put a meta description for each of my articles in both plugins, All in One SEO and Yoast's "WordPress SEO". The other day I removed all the articles' meta descriptions from "All in One SEO", but Webmaster Tools is STILL showing duplicate meta tags and descriptions. Why?

    Read the article

  • Issues With IIS Hosting Two Domains From Same Folder [closed]

    - by Bob Mc
    I have two different domain names that resolve to the same ASP.Net site. Both domains are hosted on the same server, which runs Windows Server 2003 and IIS6. The sites are differentiated in IIS Manager using host headers. However, both of the sites point to the same folder on the local drive for the site's page files. I am occasionally experiencing an ASP.Net error that says "The state information is invalid for this page and might be corrupted." I'm the site developer so I've addressed all the relevant code-related causes for this issue. However, I was wondering whether having two domains/sites sharing the same folder for an ASP.Net application might be causing this intermittent error. Also, is this generally a bad practice? Should I make separate, duplicate folders for each of the domains? Seems like that can become a maintenance headache.

    Read the article

  • Is browser and bot whitelisting a practical approach?

    - by Sn3akyP3t3
    With blacklisting, it takes plenty of time to monitor events to uncover undesirable behavior and then take corrective action. I would like to avoid that daily drudgery if possible. I'm thinking whitelisting would be the answer, but I'm unsure if that is a wise approach due to its deny-all, allow-only-a-few nature. My fear is that eventually someone out there will be blocked unintentionally. Even so, whitelisting would also block plenty of undesired traffic to pay-per-use services such as the Google Custom Search API, as well as preserve bandwidth and my sanity. I'm not running Apache, but the idea would be the same, I'm assuming. I would essentially be depending on the User-Agent identifier to determine who is allowed to visit. I've tried to take accessibility into account, because some web browsers are more geared towards those with disabilities, although I'm not aware of any specific ones at the moment. I fully understand the need not to depend on whitelisting alone to keep the site away from harm; other means to protect the site still need to be in place. I intend to have a honeypot, a checkbox CAPTCHA, use of OWASP ESAPI, and blacklisting of previously known bad IP addresses.

    Read the article

  • How do I remove only some values of a URL parameter in Google Analytics?

    - by Iain Hallam
    I'm using Google Analytics on a DokuWiki site, which uses a URL parameter to decide what to do with the current page: /page is equivalent to /page?do=show.
    1) I want to see some of these "modes", but mostly I'd like them counted as views of the bare page URL itself. The following are the only ones I want to see separately: /page?do=login, /page?do=backlinks, /page?do=revisions, /page?do=subscribe. How do I collapse the unwanted modes onto the page itself (/page)?
    2) Some modes do something that should really not have a page attached, such as /page1?do=sitemap and /page2?do=sitemap. How do I get these to show up without the page part (/?do=sitemap)?
    3) What do I do with the search mode? Can I remove the page part from this too, and still find out which page people used the search function on? /page?do=search&id=query+text
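
    One way to get the behaviour in points 1) and 2) is to decide the tracked path in JavaScript before calling _trackPageview, rather than relying on a built-in Analytics setting. A sketch using the classic ga.js async syntax, assuming the standard snippet is already on the page:

        // Sketch: normalise DokuWiki's ?do= modes before sending the pageview.
        var keep = ['login', 'backlinks', 'revisions', 'subscribe'];
        var m = location.search.match(/[?&]do=([^&]+)/);
        var mode = m ? m[1] : 'show';
        var path = location.pathname;

        if (mode === 'sitemap') {
            path = '/?do=sitemap';            // drop the page part entirely
        } else if (keep.indexOf(mode) !== -1) {
            path = path + '?do=' + mode;      // keep these modes separately
        }                                     // everything else collapses to the bare page

        _gaq.push(['_trackPageview', path]);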

    Read the article

  • Private domain purchase with PayPal: how to prevent fraud?

    - by whamsicore
    I am finally going to buy a domain I have been looking at. The domain owner wants me to give him my Godaddy account information and send him the payment via Paypal gift, so that there will be no extra charges. Should this cause suspicion? Does Paypal offer any kind of fraud protection? What is the best way to protect myself from fraud in this situation, without the need for escrow services, such as escrow.com? Any advice welcomed. Thanks.

    Read the article

  • Redirect Google crawler to different robots.txt via .htaccess

    - by user3474818
    I have googled for the answer all day and still couldn't find one. I have a virtual subdomain, www.static.example.com, which is a mirror site of www.example.com. That means I have just one root folder for the subdomain and the domain as well. I want to redirect crawlers to a different robots.txt file - robots_static.txt - whenever they see .static in the URL, and in that file I will forbid indexing with a Disallow directive. I want to do this because I have duplicated content in Google search results: the subdomain is showing the exact same content as the main domain. Does anyone know how I could make crawlers see robots_static.txt instead of robots.txt? What I have managed to find so far is this:

        RewriteCond %{HTTP_HOST} ^www.static.*$ [NC]
        RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /.*robots\.txt.*\ HTTP/ [NC]
        RewriteRule ^robots\.txt /robots_static.txt [NC,L]

    but when I check in Webmaster Tools, it still sees robots.txt as my robots file instead of robots_static.txt, so it crawls and indexes everything twice. What did I do wrong? Thanks. EDIT: This is my .htaccess file:

        ##
        # @package    Joomla
        # @copyright  Copyright (C) 2005 - 2013 Open Source Matters. All rights reserved.
        # @license    GNU General Public License version 2 or later; see LICENSE.txt
        ##

        ##
        # READ THIS COMPLETELY IF YOU CHOOSE TO USE THIS FILE!
        #
        # The line just below this section: 'Options +FollowSymLinks' may cause problems
        # with some server configurations. It is required for use of mod_rewrite, but may already
        # be set by your server administrator in a way that dissallows changing it in
        # your .htaccess file. If using it causes your server to error out, comment it out (add # to
        # beginning of line), reload your site in your browser and test your sef url's. If they work,
        # it has been set by your server administrator and you do not need it set here.
        ##

        ## Can be commented out if causes errors, see notes above.
        Options +FollowSymLinks

        ## Mod_rewrite in use.
        RewriteEngine On

        RewriteEngine On
        RewriteCond %{HTTP_HOST} !^www\.
        RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L]

        RewriteCond %{HTTP_HOST} ^www.static.*$ [NC]
        RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /.*robots\.txt.*\ HTTP/ [NC]
        RewriteRule ^robots\.txt /robots_static.txt [NC,L]

        ## Begin - Rewrite rules to block out some common exploits.
        # If you experience problems on your site block out the operations listed below
        # This attempts to block the most common type of exploit `attempts` to Joomla!
        #
        # Block out any script trying to base64_encode data within the URL.
        RewriteCond %{QUERY_STRING} base64_encode[^(]*\([^)]*\) [OR]
        # Block out any script that includes a <script> tag in URL.
        RewriteCond %{QUERY_STRING} (<|%3C)([^s]*s)+cript.*(>|%3E) [NC,OR]
        # Block out any script trying to set a PHP GLOBALS variable via URL.
        RewriteCond %{QUERY_STRING} GLOBALS(=|\[|\%[0-9A-Z]{0,2}) [OR]
        # Block out any script trying to modify a _REQUEST variable via URL.
        RewriteCond %{QUERY_STRING} _REQUEST(=|\[|\%[0-9A-Z]{0,2})
        # Return 403 Forbidden header and show the content of the root homepage
        RewriteRule .* index.php [F]
        #
        ## End - Rewrite rules to block out some common exploits.

        ## Begin - Custom redirects
        #
        # If you need to redirect some pages, or set a canonical non-www to
        # www redirect (or vice versa), place that code here. Ensure those
        # redirects use the correct RewriteRule syntax and the [R=301,L] flags.
        #
        ## End - Custom redirects

        ##
        # Uncomment following line if your webserver's URL
        # is not directly related to physical file paths.
        # Update Your Joomla! Directory (just / for root).
        ##
        # RewriteBase /

        RewriteCond %{THE_REQUEST} ^GET.*index\.php [NC]
        RewriteCond %{THE_REQUEST} !/system/.*
        RewriteRule (.*?)index\.php/*(.*) /$1$2 [R=301,L]
        RewriteCond %{THE_REQUEST} ^GET

        ## Begin - Joomla! core SEF Section.
        #
        RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]
        #
        # If the requested path and file is not /index.php and the request
        # has not already been internally rewritten to the index.php script
        RewriteCond %{REQUEST_URI} !^/index\.php
        # and the request is for something within the component folder,
        # or for the site root, or for an extensionless URL, or the
        # requested URL ends with one of the listed extensions
        RewriteCond %{REQUEST_URI} /component/|(/[^.]*|\.(php|html?|feed|pdf|vcf|raw))$ [NC]
        # and the requested path and file doesn't directly match a physical file
        RewriteCond %{REQUEST_FILENAME} !-f
        # and the requested path and file doesn't directly match a physical folder
        RewriteCond %{REQUEST_FILENAME} !-d
        # internally rewrite the request to the index.php script
        RewriteRule .* index.php [L]
        #
        ## End - Joomla! core SEF Section.

        <FilesMatch "\.(ico|pdf|flv|jpg|ttf|jpg|jpeg|png|gif|js|css|swf)$">
        Header set Expires "Wed, 15 Apr 2020 20:00:00 GMT"
        Header set Cache-Control "public"
        </FilesMatch>

        <ifModule mod_headers.c>
        Header set Connection keep-alive
        </ifModule>

        ########## Begin - Remove Etags
        #
        FileETag none
        #
        ########## End - Remove Etags

    Read the article

  • Auto tabbing not working on iPhone

    - by Sarita
    I have a problem with auto tabbing on iPhone and Android. This auto-tabbing code works perfectly in every desktop browser, but not on mobile. Please help me, it's urgent.

        $(document).ready(function() {
            WireAutoTab('<%= PartOne.ClientID %>', '<%= PartTwo.ClientID %>', 3);
            WireAutoTab('<%= PartTwo.ClientID %>', '<%= PartThree.ClientID %>', 2);
        });

        function WireAutoTab(CurrentElementID, NextElementID, FieldLength) {
            // Get a reference to the two elements in the tab sequence.
            var CurrentElement = $('#' + CurrentElementID);
            var NextElement = $('#' + NextElementID);
            CurrentElement.keyup(function(e) {
                // Retrieve which key was pressed.
                var KeyID = (window.event) ? event.keyCode : e.keyCode;
                // If the user has filled the textbox to the given length and
                // the user just pressed a number or letter, then move the
                // cursor to the next element in the tab sequence.
                if (CurrentElement.val().length >= FieldLength &&
                    ((KeyID >= 48 && KeyID <= 90) || (KeyID >= 96 && KeyID <= 105)))
                    NextElement.focus();
            });
        }

    Read the article

  • Add copyright notice to a website

    - by PeeHaa
    Not really a programming question, but I find it related. If not (or if you find this question too subjective), please tell me, yell at me, swear at me, kick me in the nuts, or just vote to close :) I've read some questions and answers here on SO about adding copyright notices, but not the specific ones I am looking for. I want to add a copyright notice to a website I created. Something like: (c) Me 2010. All rights reserved. I am aware that everything written by someone is automatically copyrighted (if I'm not mistaken, and perhaps depending on country laws). I see some sites use the format (c) Me 2009-2010. However, to me it makes no sense to add an 'end date' to the notice. I am aware I can write code to update the notice every year, but I just find it strange. Or is it just me? Another question: I also use copyrighted code from others on my site (they are all mentioned in the credits, including links to their licenses, of course). Would it still be OK to add the copyright notice to the site with only Me in it? So to sum it up, I have 2 questions: What is THE RIGHT WAY™ of adding a copyright notice to a website (or code or whatever), if there is one? Is it allowed to copyright code with other copyrighted code within it?

    Read the article

  • Awesome WordPress modular theme builder?

    - by Matt M.
    I came across a really neat modular theme for WordPress a while ago, but I can't remember the name so I'm wondering if somebody can help out. It was a paid theme with a dedicated company behind it if I'm not mistaken. The theme allowed you to change the colors of all the content areas, define widgets, and resize content areas. I'm really interested in being able to resize columns and such at the touch of a button, getting too old for the usual CSS micromanagement that goes on with brittle themes.

    Read the article

  • Rendering citations and references in HTML using PHP/Perl/Python/...

    - by Nick
    Is there a PHP/Perl/Python/... library for picking citations out of an HTML file and rendering a nice list of references at the bottom, like in Wikipedia? I'm developing a website with heavily-sourced content, and I'd really like to have automatically-generated lists of formatted references, like in Wikipedia. (Check out their philosophy page, and see how the superscript numbered citations interact with the references at the bottom. This is all dynamically generated, automatically ordered & linked.) They do it really well: the citations are linked to the references (which are backlinked to the citations), when you click on one of the links, the target is highlighted, etc. I'm tempted to build the site on MediaWiki just for this one feature, but it seems like overkill. Do I have any options?

    Read the article

  • Still prompted for a password after adding SSH public key to a server

    - by Nathan Arthur
    I'm attempting to setup a git repository on my Dreamhost web server by following the "Setup: For the Impatient" instructions here. I'm having difficulty setting up public key access to the server. After successfully creating my public key, I ran the following command:

        cat ~/.ssh/[MY KEY].pub | ssh [USER]@[MACHINE] "mkdir ~/.ssh; cat >> ~/.ssh/authorized_keys"

    ...replacing the appropriate placeholders with the correct values. Everything seemed to go through fine. The server asked for my password and, as far as I can tell, executed the command. There is indeed a ~/.ssh/authorized_keys file on the server. The problem: When I try to SSH into the server, it still asks for my password. My understanding is that it shouldn't be asking for my password anymore. What am I missing? EDIT: SSH -v log:

        Macbook:~ michaeleckert$ ssh -v [USER]@[SERVER URL]
        OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011
        debug1: Reading configuration data /etc/ssh_config
        debug1: /etc/ssh_config line 20: Applying options for *
        debug1: /etc/ssh_config line 53: Applying options for *
        debug1: Connecting to [SERVER URL] [[SERVER IP]] port 22.
        debug1: Connection established.
        debug1: identity file /Users/michaeleckert/.ssh/id_rsa type -1
        debug1: identity file /Users/michaeleckert/.ssh/id_rsa-cert type -1
        debug1: identity file /Users/michaeleckert/.ssh/id_dsa type -1
        debug1: identity file /Users/michaeleckert/.ssh/id_dsa-cert type -1
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_6.2
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.5p1 Debian-6+squeeze3
        debug1: match: OpenSSH_5.5p1 Debian-6+squeeze3 pat OpenSSH_5*
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Server host key: RSA [STRING OF NUMBERS AND LETTERS SEPARATED BY SEMI-COLONS]
        debug1: Host '[SERVER URL]' is known and matches the RSA host key.
        debug1: Found key in /Users/michaeleckert/.ssh/known_hosts:2
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: Roaming not allowed by server
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey,password
        debug1: Next authentication method: publickey
        debug1: Trying private key: /Users/michaeleckert/.ssh/id_rsa
        debug1: Trying private key: /Users/michaeleckert/.ssh/id_dsa
        debug1: Next authentication method: password
        [USER]@[SERVER URL]'s password:
        debug1: Authentication succeeded (password).
        Authenticated to [SERVER URL] ([[SERVER IP]]:22).
        debug1: channel 0: new [client-session]
        debug1: Requesting [email protected]
        debug1: Entering interactive session.
        debug1: Sending environment.
        debug1: Sending env LANG = en_US.UTF-8
        Welcome to [SERVER URL]
        Any malicious and/or unauthorized activity is strictly forbidden. All activity may be logged by DreamHost Web Hosting.
        Last login: Sun Nov 3 12:04:21 2013 from [MY IP]
        [[SERVER NAME]]$
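
    Two things commonly cause this symptom (a checklist sketch, not something specific to DreamHost): sshd silently ignores authorized_keys if the server-side permissions are too loose, and the client only offers a key automatically when it has a default name, so a custom-named key must be pointed to explicitly:

        # On the server: sshd ignores authorized_keys if ~/.ssh is group/world writable.
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys

        # On the client: offer the custom-named key explicitly...
        ssh -i ~/.ssh/[MY KEY] [USER]@[MACHINE]

        # ...or make it permanent in ~/.ssh/config:
        #   Host [MACHINE]
        #       IdentityFile ~/.ssh/[MY KEY]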

    Read the article

  • Hosting a website from a dynamic IP

    - by nick
    I recently upgraded my internet to the point that it is much faster and more reliable than my current webhost. I would like to move my current domain to be hosted at home, but my IP address is dynamic. As far as I know, I only get a new IP when I restart my modem and/or router (which is almost never) or when Cable One (my ISP) pushes out a firmware update (rarely). There are a few ways I can see doing this:
    1) convince my ISP to give me a static IP
    2) assign my router my current IP to force a static IP (which might work?)
    3) set my DNS record to my current IP address and update it on the rare occasions that it changes.
    Obviously I'm hoping that the first one works, but I don't want to pay a lot of extra money (if that's what it takes) to get a static IP address. Has anyone had any luck with something like that?
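
    For option 3, a small cron job can watch for the (rare) IP change. The bash sketch below only detects the change; the final step is a placeholder, since the actual update depends on whatever API or dynamic-DNS mechanism the DNS host offers:

        #!/bin/bash
        # Sketch: detect a public-IP change from cron; update DNS however your provider allows.
        CURRENT=$(dig +short myip.opendns.com @resolver1.opendns.com)
        LAST_FILE=$HOME/.last_public_ip

        if [ "$CURRENT" != "$(cat "$LAST_FILE" 2>/dev/null)" ]; then
            echo "$CURRENT" > "$LAST_FILE"
            # Placeholder: call your DNS provider's update API, or just notify yourself.
            echo "Public IP changed to $CURRENT" | mail -s "Update DNS record" you@example.com
        fi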

    Read the article

  • For an inexperienced VPS administrator, is Nginx a suitable alternative to Apache?

    - by James
    I couldn't think of the best way to set the title, so if somebody wants to edit it to something more appropriate, I'd be grateful ;) I'm what I would consider to be an inexperienced user/administrator when it comes to running my VPS. I can get by with a few CLI commands, I can set up Webmin and I can set up Yum repos, but beyond the very basic stuff, I'm out of my depth. So far, I'm running Apache. I don't know it particularly well, but I can get by with editing httpd.conf if I'm told what to edit. I've heard good things about Nginx and that it's not as resource-hungry as Apache. I'd like to give it a go, but I can't find any information about its suitability for administrators like me, with little experience of sysadmin or web server config. Webmin now has support for Nginx, so getting it installed and running probably won't be too much of a problem. What I'm wondering is, from a site administrator perspective, is running Nginx as transparent as running Apache? I.e., at the moment, I can just throw up WordPress and Drupal sites without having much to worry about or having to make any config changes to Apache. Would Nginx be as transparent?
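
    It is not quite as transparent: nginx has no .htaccess equivalent, so each site needs a small server block, and PHP is handed off to PHP-FPM rather than run in-process. A minimal sketch of what a WordPress-style site typically looks like under nginx (the paths and the PHP-FPM socket are assumptions that vary by distro):

        server {
            listen 80;
            server_name example.com www.example.com;
            root /var/www/example.com;
            index index.php;

            # Front-controller pattern: try the file, then fall back to index.php.
            location / {
                try_files $uri $uri/ /index.php?$args;
            }

            # Hand PHP requests to PHP-FPM (socket path is distro-dependent).
            location ~ \.php$ {
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass unix:/var/run/php-fpm.sock;
            }
        }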

    Read the article

  • Is it possible to have multiple subdomains point to the same Blogger blog?

    - by cclark
    For our application we want to have a status page which is hosted outside of the rest of our infrastructure so in case there are issues in our data center we can post updates for our users and our users will be able to access them. We registered a blog on Blogger and set it up with xyzstatus.blogspot.com and status.xyz.com. Everything seems to work fine. We need to perform some maintenance at our datacenter which will sever all connectivity so we're unable to have a redirect using nginx or apache. We'd like to do this with a short TTL CNAME DNS entry. Ideally www.xyz.com and app.xyz.com could be CNAMEd to status.xyz.com. When I setup the CNAME and go to that URL I get a Google broken robot 404 page. I figure I must need to let Google know it should associate traffic to www.xyz.com and app.xyz.com to the blog served up by status.xyz.com. But I can't see anywhere to do this in Blogger. Does anyone know if this is possible?

    Read the article

  • How do I redirect a FQDN to an internal URL?

    - by Dave
    We have internal DNS servers where we've registered an FQDN, identityreg.domain.com, that resolves internally. We also have an existing web page at https://iamserver.domain.com/product/default.asp?Workflow=process1. We need our users to be redirected to that existing URL whenever they type identityreg.domain.com. We're using IIS for the web server. I'm a newbie here, so forgive any misuse of terms. How do I get the FQDN to redirect to the URL?
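
    On IIS 7 or later, with the HTTP Redirection feature installed, the site bound to identityreg.domain.com can carry the redirect in its web.config; a sketch assuming that IIS version (on IIS 6 the equivalent is the "A redirection to a URL" option on the site's Home Directory tab):

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <system.webServer>
            <!-- Send every request for this site to the existing application URL. -->
            <httpRedirect enabled="true"
                          destination="https://iamserver.domain.com/product/default.asp?Workflow=process1"
                          httpResponseStatus="Permanent"
                          exactDestination="true" />
          </system.webServer>
        </configuration>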

    Read the article

  • SEO: best way to deal with short lifetime URLs?

    - by Mike Norgate
    I am currently in the process of redesigning a job advert site and am trying to put a lot more effort into my SEO. My question is how I should deal with the URLs that point to job adverts once the advert expires. The options I have thought of so far are:
    Return a 404 error - redirect the user to a 404 page. Will it have an effect on ranking if there are a lot of URLs that return 404s after only being up for a few weeks?
    Redirect to the job listing page - when the user requests a URL for an advert that has expired, just redirect to the main job listing page.
    Show the advert but tell the user it has closed - show the advert page but with a notification that the advert has closed. The issue I see with this is that the user will visit the page, see it's closed, and then leave the site again, which would not be good for rankings.
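
    For reference, the first two options map onto one-line Apache mod_alias rules; a sketch with made-up advert paths (410 Gone is the more precise status than 404 for pages that were deliberately removed):

        # Option 1: answer an expired advert with "gone" (410) instead of a soft 404.
        Redirect gone /jobs/2012/acme-web-developer

        # Option 2: send visitors of an expired advert back to the main listing page.
        Redirect 301 /jobs/2012/acme-sysadmin /jobs/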

    Read the article
