Search Results

Search found 9763 results on 391 pages for 'ys pro'.


  • Formatting HTML lists using CSS

    - by pwaring
    I'm trying to recreate a list in HTML which has clauses and subclauses like this:

        1. Main Clause
           (a) Sub clause
           (b) Sub clause
        2. Another main clause
           (a) Sub clause

    The problems I'm running into are:

    - If I use the existing HTML elements (ol and li) there doesn't seem to be a list style for (a) - I can have a. b. c. or A. B. C. but not (a) (b) (c).
    - If I don't use the existing HTML elements and start using span tags, then if a subclause runs beyond the end of the line it appears underneath the clause number, rather than being indented. Like so:

        (a) Very long subclause which goes
        over one line

      when what I really want is the behaviour from lists, which is:

        (a) Very long subclause which goes
            over one line

    Is there any way to get round these two problems at the same time? I'd prefer to use semantic HTML and CSS for styling, but having the clauses spaced correctly is more important than doing things 'the right way'. I may need subsubclauses at some point (e.g. (i), (ii) etc.), so I can't assume that (a) will be the maximum clause depth.
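
    A minimal sketch of one way to tackle both problems at once, using CSS counters to generate the (a) (b) (c) markers while keeping real ol/li elements for the hanging indent; the .subclauses class name is an assumption, not something from the original markup:

        /* hypothetical class applied to the nested <ol> of subclauses */
        ol.subclauses {
            list-style: none;          /* drop the default a. b. c. markers  */
            counter-reset: subclause;
            padding-left: 0;
        }
        ol.subclauses > li {
            counter-increment: subclause;
            position: relative;
            padding-left: 2.5em;       /* wrapped lines stay indented        */
        }
        ol.subclauses > li:before {
            content: "(" counter(subclause, lower-alpha) ") ";
            position: absolute;
            left: 0;                   /* marker sits in the reserved gutter */
        }

    For a deeper level, the same pattern with counter(..., lower-roman) should give (i), (ii) and so on.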

    Read the article

  • intermittent DNS problems

    - by jemminger
    How would I go about tracking this issue down? One of my websites has been up for years without issue. We're using Godaddy's nameservers for our domain. In the past two weeks, I've noticed that once, from home on my Cox cable connection, I could not connect to the site... it said the domain could not be resolved. I checked downforeveryoneorjustme.com, and it said the site was fine. The "outage" lasted maybe five minutes (through a reboot too, and I'm on a Mac FWIW) and then it started working again. Then it happened again this week, but from our office on a different Cox connection. Then it happened again from the office, but for a different domain. I called Cox during one outage, and the tech there could resolve the domain without a problem. When these outages are occurring, I can issue "host mydomain.com" and get "domain not found", but using "host mydomain.com 8.8.8.8" will resolve normally. Where do I start? We're getting reports now that our customers are experiencing it too.

    Read the article

  • Apache mod_rewrite for multiple domains to SSL

    - by Aaron Vegh
    Hi there, I'm running a web service that will allow people to create their own "instances" of my application, running under their own domain. These people will create an A record to forward a subdomain of their main domain to my server. The problem is that my server runs everything under SSL. So in my configuration for port 80, I have the following:

        <VirtualHost *:80>
            ServerName mydomain.com
            ServerAlias www.mydomain.com
            RewriteEngine On
            RewriteCond %{HTTPS} !=on
            RewriteRule /(.*) https://mydomain.com/$1 [R=301]
        </VirtualHost>

    This has worked well to forward all requests from the http: to https: domain. But as I said, I now need to let any domain automatically forward to the secure version of itself. Is there a rewrite rule that will let me take the incoming domain and rewrite it to the https version of same? So that the following matches would occur:

        http://some.otherdomain.com -> https://some.otherdomain.com
        http://evenanotherdomain.com -> https://evenanotherdomain.com

    Thanks for your help! Apache mod_rewrite makes my brain hurt. Aaron.
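
    A possible direction, not tested against this exact setup: mod_rewrite exposes the requested hostname as %{HTTP_HOST}, so a catch-all port-80 virtual host can bounce every domain to its own https:// counterpart. Note the HTTPS side still needs a certificate that covers whatever hostnames people point at the server.

        <VirtualHost *:80>
            ServerName mydomain.com
            ServerAlias *
            RewriteEngine On
            RewriteCond %{HTTPS} !=on
            # %{HTTP_HOST} carries whatever hostname the client asked for
            RewriteRule ^/?(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]
        </VirtualHost>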

    Read the article

  • Why does display:table-cell not center content without display:table?

    - by Samuel
    I'm looking for the most efficient (or elegant) way to vertically and horizontally center content of variable height. I've come up with this: http://cssdeck.com/t/2veysdkg/16, which uses CSS tables to vertically center the main content. My demands for writing this particular piece of code were:

    - Must be able to center variable and fixed width content vertically and horizontally
    - Centered content must be inside the normal document flow (so no overlapping)
    - Sticky footer and normal header of 100% width
    - As few hacks, ugly code or non-semantic HTML as possible
    - I didn't care about support for IE6, IE7 (I'll use a different stylesheet for them)

    The weird thing is that the demands above are only met when the header and footer are set to table-row, and the body tag to display:table. Which is weird because, as I understand it, CSS will generate anonymous table elements when parent table elements are missing. So table-cell should work without all the surrounding elements, and yet I've not been able to make it work. If it were up to me I would prefer not to mess with the display mode for the body tag, and leave the header and footer on display:block. But I've not been able to make it work. Does anyone understand why this doesn't work?
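
    One hedged reading of CSS 2.1: the missing table elements are indeed generated as anonymous boxes, but those anonymous wrappers can't be selected or given a height, so the generated table simply shrink-wraps its content and there is nothing taller to center inside. That would explain why the explicit display:table on the body (which can carry height: 100%) is needed. A compact sketch of the pattern described above, with placeholder class names:

        html, body { height: 100%; margin: 0; }
        body       { display: table; width: 100%; }      /* explicit table so the height chain works */
        header,
        footer     { display: table-row; }                /* rows take only their content height      */
        .main      { display: table-row; height: 100%; }  /* greedy row absorbs the remaining space    */
        .main-cell { display: table-cell; vertical-align: middle; text-align: center; }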

    Read the article

  • PHP Development Environment (Host: Windows 7, Guest: Ubuntu)

    - by Kristian Leiws Jones
    Since editing files live on a remote server slows down development, I use XAMPP on Windows to develop and then run the web apps on a Linux server. However, to avoid environment dependencies I'd like to mirror the live and development environments. What I'm asking is whether running the development server on Ubuntu inside VirtualBox, whilst editing the source files via FTP/Dreamweaver, is a good idea? If so, and I wanted to view the local website on the host OS (Windows), how would I do this? Does the guest OS have a LAN/local IP address? I notice on Windows "ipconfig /all" there are "tunneling" adapters which I assume are for VirtualBox, so I guess the guest OS has the same LAN/local IP address? If so, how would I view the websites hosted on the guest OS from the host OS? I'd also need to host an FTP server on the guest OS. Note: I need Windows! I would love to use Linux all the way -.-

    Read the article

  • cURL works but PHP cURL fails to internet [migrated]

    - by wrk2bike
    Trying to diagnose an issue using PHP to cURL to an Internet location on a RedHat Linux server. cURL is installed and working, and:

        <?php
        var_dump(curl_version());
        ?>

    shows all the correct information in the output. The issue is I can use PHP to cURL to localhost on the box itself, but not the Internet (see below). Normally I'd suspect the firewall, but I can cURL from the command line to the Internet without a problem. The box can also update its own software packages, etc. What am I missing? My test is:

        <?php
        function http_head_curl($url, $timeout = 30)
        {
            $ch = curl_init();
            curl_setopt($ch, CURLOPT_URL, $url);
            curl_setopt($ch, CURLOPT_TIMEOUT, $timeout);
            curl_setopt($ch, CURLOPT_HEADER, 1);
            curl_setopt($ch, CURLOPT_NOBODY, 1);
            curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
            $res = curl_exec($ch);
            if ($res === false) {
                throw new RuntimeException("cURL exception: ".curl_errno($ch).": ".curl_error($ch));
            }
            return trim($res);
        }

        // Succeeds, displaying headers
        echo(http_head_curl('localhost'));

        // Fails:
        echo(http_head_curl('www.google.com'));
        ?>

    Read the article

  • Falcoda Sub Domains [on hold]

    - by harry_p_6
    I run a website on Falcoda, using the home package, but the option to edit sub domains has gone. Please help! I have already set up some using the hosting console previously, but it was updated and now the link is gone. I found that you can use web.hostingconsole.info/sub-domains.cgi , but it says I can have 0 sub domains and I currently have -5 left. On the home package it says you have unlimited. Is this a problem with mine only, or is it like this for everybody?

    Read the article

  • Tracking pages with variables in GA

    - by Imran
    Recently I have updated my site; it now passes a variable on some links like so: www.mysite.com/1234/?play=true I've noticed in Google Analytics it records www.mysite.com/1234/ and www.mysite.com/1234/?play=true as two different URLs. Is there a way to merge them, because they are after all just one page? It makes "Top Content", for example, hard to read because of duplicates. I've read about something called a canonical link tag which may help this? My blog has this already inserted into the head but it doesn't make a difference. Any suggestions?
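
    For reference, the canonical link tag mentioned above is just a head element that names the preferred URL for the page, which crawlers use to fold query-string variants together; a sketch using the asker's example URL:

        <link rel="canonical" href="http://www.mysite.com/1234/">

    Within Google Analytics itself, the profile setting "Exclude URL Query Parameters" can also list play, so both paths are reported as one page without touching the markup.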

    Read the article

  • Forwarding a subdomain to the main domain using GoDaddy

    - by Ryan Hayes
    I have a blog which was hosted on Tumblr at http://blog.ryanhayes.net. I'm moving it over to http://ryanhayes.net, and have all the 301 redirects set up for the blog entries to map to my new blog, which is hosted using GoDaddy (domain included). When I try to set up a subdomain forward, I'm greeted with a nice 403 Forbidden response (as of this writing, you can see it at http://blog.ryanhayes.net). When I ping both the subdomain and the domain, they point to the same IP address, so I know the blog subdomain has at least switched over to point to the same content. I don't really understand why I would get a 403 Forbidden on the same content that I can see perfectly fine via another domain. Currently, I have a CNAME of blog pointing to @, which is how "www" is set up to forward, so I'm assuming it would do the same thing. My question is: what is the proper way to set up my DNS to make the blog subdomain forward to my main domain (301) using the GoDaddy DNS manager? Bonus: what is the background on why I am getting a 403 error the current way?

        Forbidden
        You don't have permission to access / on this server.
        Additionally, a 403 Forbidden error was encountered while trying to use an ErrorDocument to handle the request.

    UPDATE 12/7/2010: The error on the site has been fixed; you can no longer view it from my site.
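
    A hedged observation: the DNS record (CNAME or A) can only point blog.ryanhayes.net at the server; the 301 itself has to be issued by whatever answers HTTP for that hostname, and a 403 like the one quoted is consistent with the request landing on a catch-all virtual host whose directory denies access rather than on the site's own vhost. As an illustration of the idea only (shared GoDaddy hosting may not expose vhost config), the Apache form of a whole-subdomain 301 looks like:

        <VirtualHost *:80>
            ServerName blog.ryanhayes.net
            # mod_alias: send every path on the blog subdomain to the main domain
            Redirect 301 / http://ryanhayes.net/
        </VirtualHost>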

    Read the article

  • Trying to Host Server for External Access - Apache, VirtualBox & Portforwarding

    - by Tspoon
    Banging my head on the wall at this stage.... trying to host my Apache site on Ubuntu 12.10 with VirtualBox, running a Windows 8 host. Things I've done:

    - Ensured Apache is listening on ports 80, 443 and 8080 (for thoroughness):

        tcp    0    0 0.0.0.0:443     0.0.0.0:*    LISTEN    3355/httpd
        tcp    0    0 0.0.0.0:8080    0.0.0.0:*    LISTEN    3355/httpd
        tcp    0    0 0.0.0.0:80      0.0.0.0:*    LISTEN    3355/httpd
        tcp    0    0 0.0.0.0:22      0.0.0.0:*    LISTEN    681/sshd

    - VM is using a bridged network connection
    - Assigned a static IP to my Ubuntu VM, which can be accessed fine from within the network
    - Forwarded TCP ports 80, 8080, 443 to the static IP of the VM on my router
    - Given my VM a static NAT address
    - Turned off the Ubuntu firewall and the router firewall
    - Read on forums that my ISP (Eircom) allows port 80 to be used

    And I still can't access my site using the WAN/external IP (checked internally and using CanYouSeeMe.org). It says all the ports I mentioned are closed. I'm really at a loss as to what to try next... Am I missing something silly here? Note: I haven't assigned a static IP address within the router, only within the VM, and the DHCP server is enabled. Is that bad?

    Read the article

  • Adding an ASP website in IIS7.5 on Windows 7

    - by birdus
    I'm trying to add an ASP website under IIS 7.5 on Windows 7 and am having no luck so far. This site is just for me to hit locally. I need to make some changes to some of the HTML in some of the ASP files and I just need to be able to test my changes as I make them. I installed IIS and checked the box for ASP. Next, I added an Application Pool which I called ASP and which has "No Managed Code" and "ASP" set. Next, I added the website by right-clicking "Sites" then clicking "Add Web Site...". I gave it a name, set it to use the ASP app pool, pointed it to the path where the ASP code is (I left it at pass-through authentication), and typed in 5555 as the port, so as not to interfere with the default website. The code is sitting on my server and the path simply uses the mapped drive that I always use to access files on that drive array. When I type in http://mysite:5555, I get "could not find mysite:5555". I don't really know if all these settings are correct or what else I should try. What am I missing? Thanks, Jay

    Read the article

  • Is traditional JavaScript image pre-loading taboo

    - by Evan Plaice
    I remember the good-old-days (not really) back when I was still sucking the teat of Dreamweaver to build websites and the lure of playing copypasta with fancy built-in scripts (e.g., image-swap) was like black magic. I'm pretty far removed from that nowadays, but I was adapting a small site from its original FrontPage (::cringe::) format to a standard HTML/CSS implementation and couldn't help wondering... should I re-implement the JavaScript image pre-loading into the current version? Or is there a better way? I don't want to block the page from loading by requiring the user to request all the assets within the page by using the traditional JavaScript pre-loader method. I value giving the user something to look at ASAP, and there's some potential harm to my Google mojo by doing so. Is there a cleaner solution to prevent unnecessary page reflows during loading? Such as setting static width/height dimensions through a CSS style attribute on the image element.
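
    On that last idea, reserving the image's space up front is normally enough to avoid reflow without any pre-loader script; a small sketch (file names and dimensions are made up):

        <!-- intrinsic size declared up front, so the layout doesn't jump when the file arrives -->
        <img src="gallery/photo-01.jpg" width="640" height="480" alt="Photo 1">

        <!-- or the same thing via a shared CSS class -->
        <style>
          .thumb { width: 640px; height: 480px; }
        </style>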

    Read the article

  • CSS Intelligent Merger

    - by BHare
    I am looking for a tool very similar to http://www.tothepc.com/archives/combine-merge-multiple-css-files/ However, given this example:

    test1.css:

        #admin {
            background: #c9d2dc;
            border-color: #ccc
        }

    test2.css:

        #admin {
            background: #222;
            border-bottom: 1px solid #444;
            border-left: 1px solid #444;
            padding: 2px;
            position: fixed;
            right: 0px;
            top: 0px;
            width: 120px;
            z-index: 2
        }

    it will only allow you to select one or the other. I want to merge them, making it:

        #admin {
            background: #c9d2dc;
            border-color: #ccc;
            border-bottom: 1px solid #444;
            border-left: 1px solid #444;
            padding: 2px;
            position: fixed;
            right: 0px;
            top: 0px;
            width: 120px;
            z-index: 2
        }

    Read the article

  • Apache returns 304, I want it to ignore anything from client and send the page

    - by Ayman
    I am using Apache HTTPD 2.2 on Windows. mod_expires is commented out. Most other settings are unchanged from the defaults. gzip is on. I made some changes to my .js files. My client gets one 304 response for one of the .js files and never gets the rest. How can I force Apache to sort of flush everything and send all new files to the client? The main HTML file includes these scripts in the head section of the main page:

        <script src="js/jquery-1.7.1.min.js" type="text/javascript"> </script>
        <script src="js/jquery-ui-1.8.17.custom.min.js" type="text/javascript"></script>
        <script src="js/trex.utils.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.core.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.codes.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.emv.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.b24xtokens.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.iso.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.span2.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.amex.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.abi.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.barclays.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.bnet.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.visa.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.atm.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.apacs.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.pstm.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.stm.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.thales.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.fps-saf.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.fps-iso.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.app.js" type="text/javascript" charset="utf-8"></script>

    The Apache access log has the following:

        [07/Jul/2013:16:50:40 +0300] "GET /trex/index.html HTTP/1.1" 200 2033 "-"
        [07/Jul/2013:16:50:40 +0300] "GET /trex/js/trex.fps-iso.js HTTP/1.1" 304
        [08/Jul/2013:07:54:35 +0300] "GET /trex/index.html HTTP/1.1" 304 - "-"
        [08/Jul/2013:07:54:35 +0300] "GET /trex/js/trex.iso.js HTTP/1.1" 200 12417
        [08/Jul/2013:07:54:35 +0300] "GET /trex/js/trex.amex.js HTTP/1.1" 200 6683
        [08/Jul/2013:07:54:35 +0300] "GET /trex/js/trex.fps-saf.js HTTP/1.1" 200 2925
        [08/Jul/2013:07:54:35 +0300] "GET /trex/js/trex.fps-iso.js HTTP/1.1" 304

    The Chrome request headers are as below. This file is OK, latest:

        Request URL:    http://localhost/trex/js/trex.iso.js
        Request Method: GET
        Status Code:    200 OK (from cache)

    This file is OK, latest:

        Request URL:    http://localhost/trex/js/trex.amex.js
        Request Method: GET
        Status Code:    200 OK (from cache)

    This one is also OK:

        Request URL:    http://localhost/trex/js/trex.fps-iso.js
        Request Method: GET
        Status Code:    200 OK (from cache)

    The rest of the scripts all have 200 OK (from cache).
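
    A side note on the behaviour: both 304 and "200 OK (from cache)" mean the browser thinks its copy is still fresh, so the new files are never requested. One hedged way to force revalidation of scripts, assuming mod_headers is loaded (another common route is versioning the URLs, e.g. trex.core.js?v=2):

        # make browsers revalidate .js files on every request;
        # Apache then answers 200 with the new file, or 304 only if it is truly unchanged
        <FilesMatch "\.js$">
            Header set Cache-Control "no-cache, must-revalidate"
        </FilesMatch>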

    Read the article

  • Lost website rankings as a result of using a 302 redirect instead of a 301

    - by george peris
    I have an online store and during the last 2 days I noticed a big change in its SERPs for some keywords. The site is indexed in local Google for 5 keywords, and the online store is based on Magento 1.7. 5 days ago, I set up an SSL certificate in the online store in order to get better ranking results. I set up the SSL and followed the instructions from Magento to make permanent 301 redirects from the HTTP URLs to the HTTPS URLs, and I replaced all the URLs of the online store with HTTPS. When I saw in the SERPs that the rankings for some keywords were going up and down, I checked some URLs to see if all was going well with the 301 redirects and found that the redirects were 302 and not 301, which is a big bug caused by Magento. I solved the problem with the .htaccess, but still the rankings for the homepage have disappeared. I fetched the site again using Google Webmaster Tools and am waiting to see. Can you please advise if I did something wrong, and what else I could do?

    Read the article

  • Registrar with good security, DNS hosting, and DNSSEC and IPv6 resolvers?

    - by semenko
    I'm looking to move my domains away from GoDaddy, but I'm having a tough time finding anyone with comparable features at a (even remotely) similar price. I've looked at the usual suggestions (NameCheap, Gandi.net, etc.), but they all seem to lack much of the GoDaddy feature base. I'm looking for:

    - DNSSEC
    - IPv6 resolvers (dig pdns01.domaincontrol.com AAAA; etc.)
    - SSL logins by default
    - HTTP-only login cookies
    - No stupid password restrictions
    - Two-factor authentication
    - No DNS record limits
    - Rough DNS statistics (queries/day, etc.)
    - Audit trails

    GoDaddy has all of these, except two-factor, for $3/month. See http://www.godaddy.com/domains/dns-hosting.aspx I can't seem to find any other registrar that supports even a few of these. Is there a registrar that offers comparable features? Or, barring that, a DNS hosting service that offers similar features? (AWS Route53 doesn't offer DNSSEC or IPv6)

    Read the article

  • dns hosting - url forwarding - hiding forwarded url?

    - by jeremycollins
    I have free DNS hosting with the domain registrar and I'd like the DNS-hosted domain www.example.com to display the contents of www.myotherlongdomain.com. I only have 301/302/iframe forwarding options; however, I want to mask the redirected (longdomain) URL. If I use frames, users can view the source and see the (longdomain) URL the contents are coming from. How can I hide it so it always displays www.example.com? There is no cloaking/masking option with the registrar. Thanks.

    Read the article

  • Blog/CMS software with editing style like Stack Exchange

    - by Merlyn Morgan-Graham
    I have been updating a WordPress blog lately and found the turnaround time for content creation and editing is much worse than for Stack Overflow posts. Part of this has to do with writing original compositions rather than riffing off a question. But part of it is the software. I am looking for CMS/blog software that has an overall editing experience similar to Stack Overflow. The most important features I'm looking for (all of which speed up data entry):

    - Inline editing (mostly)
    - Real-time preview on the same page
    - Markdown support (with inline and block-level code support)
    - Syntax highlighting

    The features I must maintain from my self-hosted WordPress:

    - Somewhat popular/supported software, with extensibility support
    - Self-hostable
    - Will work with MySQL

    WordPress has plugins for all these, but they don't necessarily work together. For example I've found a few markdown-on-save plugins, but I doubt those have a chance of ever supporting inline editing or real-time previews. Also the most popular syntax highlighting plugins don't support inline code blocks, and I doubt previews would work with other syntax highlighting methods. If I get a wiki/web page content creation system along with it, or somehow integrate this into GitHub (with all the features I requested), I'll accept those as side benefits :) Formed as a question: are there any pieces of content creation software for making a blog that support an editing style like Stack Exchange and Stack Overflow? Or magic combinations of WordPress plugins that offer the same?

    Read the article

  • Do print and bookmark links really work?

    - by Joseph Mastey
    It seems to be common on the web to provide users with some visual element on the page to either print or bookmark a page. This is all well and good (and probably doesn't hurt for the most part), but I question its effectiveness at causing the intended behavior. Is there any evidence to suggest that this causes an increase in bookmarking/printing behavior? Similarly, is there any evidence that users will use this method rather than the browser's default interface for the functions? I am really looking for user research with actual results, rather than anecdotes to answer this question. Thanks, Joseph Mastey

    Read the article

  • First steps into css - aligning data insite one DIV [on hold]

    - by Andrew
    I am trying to move away from tables and start doing CSS. Here is the HTML code that I am currently trying to place into a nice looking container:

        <div>
          <div>
            <h2>ID: 4000 | SSN#: 4545</h2>
          </div>
          <div>
            <img src="./images/tenant/unknown.png">
          </div>
          <div>
            <h3>Names Used</h3>
            Will Smith<br> Bill Smmith<br> John Smith<br>
            Will Smith<br> Bill Smmith<br> John Smith<br>
            Will Smith<br> Bill Smmith<br> John Smith<br>
          </div>
          <div>
            <h3>Phones Used</h3>
            123456789<br> 123456789<br> 123456789<br> 123456789<br>
            123456789<br> 123456789<br> 123456789<br> 123456789<br>
          </div>
          <div>
            <h3>Addresses Used</h3>
            125 Main Evanston IL 60202<br>
            465 Greenwood St. Schaumburg null 60108<br>
            125 Main Evanston IL 60202<br>
            465 Greenwood St. Schaumburg null 60108<br>
            125 Main Evanston IL 60202<br>
            465 Greenwood St. Schaumburg null 60108<br>
            125 Main Evanston IL 60202<br>
            465 Greenwood St. Schaumburg null 60108<br>
            125 Main Evanston IL 60202<br>
            465 Greenwood St. Schaumburg null 60108<br>
          </div>
        </div>

    I now understand how to create classes and assign them to elements, and I have no issues doing colors, but I am very confused about element alignment. Could you suggest a nice way to lay this out with some CSS that I can analyze and use as a starting point for learning?
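
    Purely as a starting point, a rough sketch of floating those blocks into columns; every class name here (.tenant, .col) is an assumption that would need to be added to the markup above:

        .tenant         { overflow: hidden; }          /* contain the floated children */
        .tenant h2      { margin: 0 0 10px; }
        .tenant img     { float: left; margin-right: 20px; }
        .tenant .col    { float: left; width: 30%; margin-right: 3%; }
        .tenant .col h3 { margin-top: 0; }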

    Read the article

  • Soft 404 error on redirected outbound links

    - by Techlands
    I have a redirect script on my site which sends visitors to an affiliate site. However, in the last month I've noticed that Google Webmaster Tools is reporting my outbound links as a 404 error. Here is the breakdown of how it's set up. My outbound links are coded like this:

        <a href="/f/c123" rel="nofollow" target="_blank">Link Title</a>

    My redirect script will then perform a 302 redirect to the affiliate link. Originally I had the affiliate links (CJ) directly in the HTML, however I noticed over time that this had some impact on my site's traffic. So I changed them to a redirect script and my traffic returned. This seemed to work with no issues for over 1 year but now I'm getting soft 404 errors in Google Webmaster Tools. I did try adding a rule to my robots.txt to block any links starting with /f/ but I'm not sure if this will help or if Google will still report soft 404 errors. I am considering, as a possible option, changing the a tag to a button tag and using an onclick event to load the link.
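
    For reference, the robots.txt rule described above would look like the sketch below; bear in mind it only stops crawling of /f/ URLs, so it may not clear soft-404 reports for links Google has already seen:

        User-agent: *
        Disallow: /f/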

    Read the article

  • Homepage 301 Redirect to SSL Homepage

    - by user33692
    I'm hoping somebody might be able to provide a bit of advice on an issue I am having. I have a site where we implemented a 301 redirect on the homepage from http to https. We have links on the homepage to other parts of the site that are not under SSL (in fact there is only one other page under SSL). When I go to our webmaster account I notice that we are not being provided with any webmaster information (search queries, backlinks) related to our homepage under SSL. I performed a Fetch as Google on the homepage and the information it returned is:

        HTTP/1.1 301 Moved Permanently
        Date: Fri, 08 Nov 2013 17:26:24 GMT
        Server: Apache/2.2.16 (Debian)
        Location: https://mysite.com/
        Vary: Accept-Encoding
        Content-Encoding: gzip
        Content-Length: 242
        Keep-Alive: timeout=15, max=100
        Connection: Keep-Alive
        Content-Type: text/html; charset=iso-8859-1

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>301 Moved Permanently</title>
        </head><body>
        <h1>Moved Permanently</h1>
        <p>The document has moved <a href="https://mysite.com/">here</a>.</p>
        <hr>
        <address>Apache/2.2.16 (Debian) Server at mysite.com</address>
        </body></html>

    I am worried that Google Fetch is not getting the correct title tags and meta information from our homepage and that this is hurting our search results. Additionally, I am worried that we need to do something specific with the sitemap to ensure that Google is correctly indexing all our pages and can flow from the https to the http without issues. Does anybody have any advice on how we can correctly set this up, or be sure that Google is fetching the correct information?

    Read the article

  • New Wordpress posts generate 404 error.

    - by Steve
    I had a working installation of WordPress, and I recently encountered an issue where, when I tried to log in to the back-end, the browser would redirect to the login URL of the previous domain WordPress was installed on. I fixed this by reinstalling WordPress, and I can now log in to the back-end, but any new posts I make, or old posts I have, generate 404 errors. Additionally, if I try to navigate to any category page, I again receive a 404 error. I have looked at the wp_posts table of my database, and the GUID field of each row contains the correct domain name and URL structure. What should I be checking here? Site in question.
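
    One common culprit for exactly these symptoms (home page loads, individual posts and category pages 404) is pretty-permalink rewrite rules that were lost in the reinstall; re-saving Settings -> Permalinks usually regenerates them. For reference, the standard WordPress .htaccess block is:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>
        # END WordPress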

    Read the article

  • Google adwords API - credit card safety question

    - by user5650
    Google is asking me to fax a credit card xerox in order to activate the AdWords API in MCC.

    1) Are there alternatives to this - is there a 3rd party provider who will give me this service without me sending them the credit card info?
    2) How secure is it to send my credit card fax via some online fax service?
    3) Do you think they will reject the application if I hide my CVV number in the fax?

    Any other thoughts appreciated.

    Read the article

  • How do you find all the links to disavow for a Google reconsideration request? [duplicate]

    - by QF_Developer
    This question already has an answer here: How to identify spammy domains giving backlinks to my site (to submit in disavow links in WMT) - 2 answers

    A few months ago I received the following notification on Google Webmaster for a website I look after.

        Unnatural links to your site - impacts links
        Google has detected a pattern of unnatural artificial, deceptive, or manipulative links pointing to pages on this site. Some links may be outside of the webmaster's control, so for this incident we are taking targeted action on the unnatural links instead of on the site's ranking as a whole. Learn more.

    The question here is: should we actively attempt to disavow these links, given that the action is seemingly targeted to just a bunch of keywords? I've downloaded the inbound links sample from Google Webmaster and so far I've been through the disavow and reconsideration request process 6 times, each taking 2-3 weeks, only to be supplied just 2 more links that Google don't approve of. At this rate it will take me the rest of my natural life to clean up all these spammy links! It seems disavowing is futile, as they haven't implemented broad actions against the website as a whole and (from what I can gather) have already nullified the value of those offending links. Under the quoted statement above, however, is a reconsideration request button that seems to imply I should be actively doing something here?

    UPDATE 14th October -- I have since created a small .NET application that you can feed the CSV sample links file into from Google Webmaster. What this tool does is crawl all the links and look for specific linking patterns as per some configurable match strings. I realised that many of the links that Google are taking issue with were created by a rogue SEO firm we hired several years ago. All the links are appended with 1 of 5 different descriptions. The application I built uses some regexes to isolate any link sources with these matching appendages and automatically builds the disavow txt file. In the end it had to come down to an algorithm, as manually disavowing links on this scale would take weeks! I will post the app here once I've cleaned it up.
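
    For anyone taking the same route, the file uploaded to the disavow tool is plain text with one entry per line: full URLs for individual pages, a domain: prefix to disavow an entire host, and # for comments. A small made-up example:

        # links placed by the old SEO firm
        domain:spammy-directory.example
        http://article-farm.example/some-landing-page.html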

    Read the article
