Search Results

Search found 8461 results on 339 pages for 'disavow links'.


  • The Best Ways For Link Building For New Websites

    Creating a website can be a very interesting and fun activity, but you also need to build links to make it successful. Link building makes a website more visible and increases its traffic flow. One of the best ways to start is to target easy, relevant keywords that are easily remembered, so that you attract new readers regularly and can earn money from affiliate companies. You should also submit your articles to web directories, as this can bring hundreds of backlinks, which can be very good for your website.

    Read the article

  • Duplicate pages indexed in Google

    - by Mert
    I made a small coding mistake and Google indexed my site incorrectly. This is the correct form: https://www.foo.com/urunler/171/TENGA-CUP-DOUBLE-HOLE But Google indexed my site like this: https://www.foo.com/urunler/171/cart.aspx First I fixed the problem and made a sitemap with only the correct link in it. Now I check Webmaster Tools and I see this: Total indexed 513 / Not selected 544 / Blocked by robots 0. So I think this is caused by the duplicate indexing, and it looks like the "not selected" pages are keeping the correct pages from being indexed. I want to know how to fix the "https://www.foo.com/urunler/171/cart.aspx" links. Should I fix it in code, or should I ask Google to re-index my site? If I should redirect the wrong/duplicate links to the correct ones, how should that be done?
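    The standard fix for an already-indexed duplicate is a permanent (301) redirect from the wrong URL to the canonical one, so the two entries get consolidated rather than competing. On an ASP.NET site like this one that would be IIS URL Rewrite or Response.RedirectPermanent; the framework-neutral Python/WSGI sketch below only illustrates the mechanism, and the URL mapping in it is a hypothetical example:

    # 301-redirect known duplicate paths to their canonical form so
    # crawlers consolidate the two index entries into one.
    CANONICAL = {
        "/urunler/171/cart.aspx": "/urunler/171/TENGA-CUP-DOUBLE-HOLE",  # hypothetical mapping
    }

    def redirect_duplicates(app):
        def middleware(environ, start_response):
            target = CANONICAL.get(environ.get("PATH_INFO", ""))
            if target:
                start_response("301 Moved Permanently",
                               [("Location", "https://www.foo.com" + target)])
                return [b""]
            return app(environ, start_response)
        return middleware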

    Read the article

  • EAIESB is pleased to announce the release of the book “Oracle Service Bus (OSB) in 21 days: A hands on guide for OSB”.

    - by JuergenKress
    It is available for order, and signed copies are available through our website. For more information, visit http://www.eaiesb.com/OSBin21daysbook.html and http://eaiesb.com/OSBin21days.html. For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration, please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center. Technorati Tags: OSB, EAIESB, OSB 21 days, SOA Community, Oracle SOA, Oracle BPM, Community, OPN, Jürgen Kress

    Read the article

  • Is there a downside of running too many Symfony applications for 1 website?

    - by gentrobot
    Recently I got access to a Symfony 1.2 project which serves just one website but has too many applications. In the past I have developed websites with no more than 2 or 3 applications; here, cross-application links are achieved by passing the full URL to the href attribute. Since the site is still working absolutely fine, my question is: will having too many front controllers (approximately 25-30) hamper the performance of the website? Should I just create cross-application links, or put in the additional effort of combining similar applications (I guess almost all of the site's frontend) into one application with different modules?

    Read the article

  • Eleven Eleven Eleven Plus Two

    - by Larry Wake
    You probably already know that Oracle Solaris 11 11/11 was not in fact launched on 11/11/11. We had our reasons, one of the primary ones being that it would have collided with Veterans Day. But I'm going to venture a blog post today, even though it's again of course Veterans Day, to catch up on some news for Oracle Solaris 11's second anniversary (plus two days). Most recently, we had lots to talk about at Oracle OpenWorld; Markus Flierl gives an excellent recap on his blog. Also, you can now download the various Solaris-related presentations that were given this year. Find the list and links at: Focus on Oracle Solaris (http://bit.ly/OOW13-Solaris). If you follow the links above, you'll see there's lots to learn about how to get major benefits from Oracle Solaris 11 today, and you'll also find out about some of the new things we're busily at work on. Onward to year three!

    Read the article

  • Making a subdomain the new main domain for SSL

    - by Dean Legg
    What would be your best advice for changing your main domain to a subdomain? Our site used to be example.co.uk but has now changed to https://secure.example.co.uk/. Any example.co.uk URLs redirect to the new secure domain. Effectively, example.co.uk is now just there to redirect old links and is no longer part of the site's URL structure. I have added the new domain https://secure.example.co.uk/ to Google Webmaster Tools and added the sitemap, and I'm waiting for it to be indexed. Is there anything else you would advise, and will this take away a lot of the juice from all the links I built for example.co.uk? I'm guessing this is not best practice, as I have struggled to find any information online on this subject.
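    Whether the old domain keeps passing authority depends largely on every example.co.uk URL answering with a permanent (301) redirect rather than a temporary (302) one, so that is worth verifying before anything else. A quick stdlib check, using the placeholder hostnames from the question:

    import http.client

    # HEAD each old URL and confirm it answers 301 with the new secure host
    # in the Location header; a 302 here would not pass authority the same way.
    def check_redirect(host, path="/"):
        conn = http.client.HTTPConnection(host, timeout=10)
        conn.request("HEAD", path)
        resp = conn.getresponse()
        print(host + path, resp.status, resp.getheader("Location"))
        conn.close()

    check_redirect("example.co.uk")   # expect: 301 https://secure.example.co.uk/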

    Read the article

  • What effect does using itemprop="significantLinks" on anchors have for SEO and what is the ideal use?

    - by hdavis84
    I'm practicing the application of microdata via http://schema.org. Anyone who's browsed the documentation there knows the notes on how each property should be used leave a lot of room for clearer explanation. My question here is about the "significantLinks" property and how it affects SEO for on-page, in-content anchor text. Does anyone have more information on whether it's good to use for link optimization? I understand what schema.org means by using it on "non-navigational links", and that those links should be relevant to the current page's meaning. But will using this property hurt or help SEO for each page?

    Read the article

  • Does Google pick up anchor text that is in nested elements?

    - by dangerDAN
    When Google looks at anchor text on a website, will it pick up the text if it is inside nested elements? So, for example, from:

    <a href="http://www.google.com/">Visit Google</a>

    to:

    <a href="http://www.google.com/">
      <div class="circle">
        <span>Visit Google</span>
      </div>
    </a>

    The reason I ask is that I want to use CSS3 styling on certain links on my website, to style them as circles. But the anchor text needs to be picked up for these links, so I want to know whether or not the above is bad practice in this case.

    Read the article

  • cannot do apt-get update on 12.04, cannot connect to IP

    - by user171771
    I cannot update my Ubuntu from the terminal or the Software Centre, yet the repository links work if I go to them manually. Here's the terminal feed (I had to strip the URLs down, but they do work):

    ........
    Err security.ubuntu.com precise-security/main amd64 Packages
      Unable to connect to 172.25.10.3:80:
    .........
    .......
    ........
    .......
    W: Failed to fetch archive.linux.duke.edu/ubuntu/dists/precise/universe/binary-amd64/Packages
      Unable to connect to 172.25.10.3:80:

    Read the article

  • Does Google see the output of document.write?

    - by merk
    I've got a site where people can list machinery for sale. Each item for sale has its own dynamic page. On each of these pages we allow the seller to have a link back to their own website. Some people sell only a handful of items and some sell dozens or hundreds, so in some cases we can have 100 links back to the same external site. Our SEO guy says this is bad (I'll open another question on that). So I was wondering: if I take the links and output them using document.write, will that hide them from Google and the other search engines?

    Read the article

  • What is required to create local business rich-snippets complete with sitelinks AND breadcrumbs?

    - by Felix
    I have a local business directory site. I would like to mark up my business listing "profile" pages for display as enhanced listings/rich snippets, complete with business names, addresses and phone numbers. I would also like to display sitelinks and path-based breadcrumbs to help users navigate the site's directory hierarchy (which is deep). Is there a limit to the number of breadcrumbs a site can mark up? Is there a separate limit on the number of breadcrumbs Google/Bing will display in the SERP? What kind of markup language(s) would best position my site to show sitelinks AND breadcrumbs? For example:

    Find a business > Browse by Location > State > City > Zip

    or

    Find a business > Choose Service > Browse by Location > State > City

    Thanks all!

    Read the article

  • quickest way to research a set of pages' backlinks

    - by JeremyB
    I have a list of 300+ pages (chosen based on which pages rank for a keyword I'm interested in) and I want to compile a list of all the (known) inbound links to those pages. What's the fastest way to do this? It seems like the available tools (Yahoo Site Explorer, SEOMoz, Majestic) require you to either a) manually export each set of links by hand, or b) get data at the domain level (e.g. Majestic's Clique Hunter). Does anyone know of an efficient way to do this? I ask because I'm about to write a bunch of code and I don't want to waste my time if there's another tool that will do the job. I know SEOMoz and Majestic have APIs, but I'm wondering if there's a more user-friendly option.
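    For 300+ targets the pragmatic route is exactly the code the asker is about to write: loop over the pages, pull inbound links for each from whichever provider API you settle on, and aggregate everything into one file. In the sketch below, fetch_backlinks() is a hypothetical stand-in for that provider call (the real Moz/Majestic endpoints and auth differ); the surrounding batching is the part that replaces the manual exports:

    import csv

    def fetch_backlinks(page_url):
        """Hypothetical stand-in: return source URLs linking to page_url."""
        raise NotImplementedError("wire this to your backlink provider's API")

    def build_report(pages, out_path="backlinks.csv"):
        # one CSV row per (target page, inbound link) pair across all pages
        with open(out_path, "w", newline="") as fh:
            writer = csv.writer(fh)
            writer.writerow(["target_page", "inbound_link"])
            for page in pages:
                for source in fetch_backlinks(page):
                    writer.writerow([page, source])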

    Read the article

  • customizing Google Custom Search

    - by user19866
    I am using Google Custom Search for my site, and I want Google to provide at most 2 results per site. I am not sure whether this is possible, but it seems there is no such option in Google CSE. For example, my CSE has 10 listed sites; whenever a user makes a search, the results should show at most two links from any one site. Currently the CSE returns multiple results from a single site, and those fill page 1; if it were capped at 2 per site, the user would have a better chance of seeing links from the other sites too.

    Read the article

  • Smart auto detect and replace URLs with anchor tags

    - by Robert Koritnik
    I've written a regular expression that automatically detects URLs in free text that users enter. This is not as simple a task as it may seem at first. Jeff Atwood writes about it in his post. His regular expression works, but needs extra code after detection is done. I've managed to write a regular expression that does everything in a single go. This is what it looks like (I've broken it into separate lines to make it easier to follow):

    1  (?<outer>\()?
    2  (?<scheme>http(?<secure>s)?://)?
    3  (?<url>
    4    (?(scheme)
    5      (?:www\.)?
    6      |
    7      www\.
    8    )
    9    [a-z0-9]
    10   (?(outer)
    11     [-a-z0-9/+&@#/%?=~_()|!:,.;čšžćđ]+(?=\))
    12     |
    13     [-a-z0-9/+&@#/%?=~_()|!:,.;čšžćđ]+
    14   )
    15 )
    16 (?<ending>(?(outer)\)))

    As you can see, I'm using named capture groups (used later in Regex.Replace()), and I've also included some local characters (čšžćđ) that allow our localised URLs to be parsed as well. You can easily omit them if you'd like. Anyway, here's what it does (referring to the line numbers):

    1 - detects whether the URL starts with an opening brace (is contained inside braces) and stores it in the "outer" named capture group
    2 - checks if it starts with a URL scheme, also detecting whether the scheme is SSL or not
    3 - starts parsing the URL itself (which will be stored in the "url" named capture group)
    4-8 - an if statement that says: if "scheme" was present then the www. part is optional, otherwise it is mandatory for a string to be a link (so this regular expression detects all strings that start with either http or www)
    9 - the first character after http:// or www. should be a letter or a digit (this could be extended to cover even more links, but I've decided to omit other characters because I can't remember a link that starts with any other character)
    10-14 - an if statement that says: if "outer" (braces) was present, capture everything up to the last closing brace, otherwise capture everything
    15 - closes the named capture group for the URL
    16 - if an opening brace was present, captures the closing brace as well and stores it in the "ending" named capture group

    The first and last lines used to have \s* in them as well, so users could write an opening brace and put a space inside it before pasting a link. Anyway, my code that replaces links with actual anchor HTML elements looks exactly like this:

    value = Regex.Replace(
        value,
        @"(?<outer>\()?(?<scheme>http(?<secure>s)?://)?(?<url>(?(scheme)(?:www\.)?|www\.)[a-z0-9](?(outer)[-a-z0-9/+&@#/%?=~_()|!:,.;čšžćđ]+(?=\))|[-a-z0-9/+&@#/%?=~_()|!:,.;čšžćđ]+))(?<ending>(?(outer)\)))",
        "${outer}<a href=\"http${secure}://${url}\">http${secure}://${url}</a>${ending}",
        RegexOptions.Compiled | RegexOptions.CultureInvariant | RegexOptions.IgnoreCase);

    As you can see, I'm using the named capture groups to replace each link with an anchor tag:

    ${outer}<a href=\"http${secure}://${url}\">http${secure}://${url}</a>${ending}

    I could also omit the http(s) part in the anchor's display text to make links look friendlier, but for now I've decided not to.

    Question: I would also like my links to be shortened. When a user pastes a very long link (for instance from Google Maps, which usually generates long URLs), I would like the visible part of the anchor tag to be truncated to some number of characters while the link itself still works. Does the replacement string support any notation like that, so I can still use a single Regex.Replace() call?
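    The replacement-string syntax alone cannot transform captured text, but Regex.Replace has an overload that takes a MatchEvaluator delegate, and truncation is easy inside such a callback. Below is a minimal sketch of the callback idea in Python, where re.sub accepts a function the same way; the pattern here is a simplified cousin of the one above, not a drop-in replacement for it:

    import re

    # simplified detector: require either a scheme or a leading www.
    URL_RE = re.compile(
        r"(?P<scheme>https?://)?"
        r"(?P<url>(?(scheme)(?:www\.)?|www\.)[a-z0-9][-a-z0-9/+&@#%?=~_()|!:,.;]*)",
        re.I)

    def linkify(match, max_visible=30):
        scheme = match.group("scheme") or "http://"
        href = scheme + match.group("url")
        # truncate only the visible text; the href keeps the full URL
        text = href if len(href) <= max_visible else href[:max_visible] + "..."
        return '<a href="%s">%s</a>' % (href, text)

    print(URL_RE.sub(linkify, "see www.example.com/a/really/long/path/that/keeps/going"))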

    Read the article

  • How To Edit XML File

    - by thebourneid
    I have a movie collection catalogue with local links to folders and files for easy access. Recently I reorganised my entire hard disk and I need to update the links; I'm trying to do that automatically with Perl. I can export the data to an XML file and import it again, and I can extract the new file paths with File::Find, but I'm stuck on two problems. I have no idea how to connect the $title from the new file path with the corresponding $title in the XML file, and, since I'm dealing with such files for the first time, I don't know how to proceed with the replacement process. Here is what I've done so far:

    use strict;
    use warnings;
    use File::Basename;
    use File::Find;
    use File::Spec;
    use XML::Simple;
    use Data::Dumper;

    my $dir_target = 'somepath';
    find(\&a, $dir_target);

    sub a {
        /\.iso$/ or return;
        my $fn = $File::Find::name;
        $fn =~ s/\//\\/g;
        $fn =~ /(.*\\)(.*)/;
        my $path = $1;
        my $filename = $2;
        my $title = (File::Spec->splitdir($fn))[2];
        $title =~ s/(.*?)\s\(\d+\)$/$1/;
        $title =~ s/~/:/;
        $title =~ s/`/?/;
        my $link_local = '<link><description>Folder</description><url>' . $path . '</url><urltype>Movie</urltype></link><link><description>' . $filename . '</description><url>' . $fn . '</url><urltype>Movie</urltype></link>' unless $title eq '';
        my $txt = 'somepath/log.txt';
        my $xml_in = XMLin('somepath/test.xml', ForceArray => 1, KeepRoot => 1);
        my $xml_out = XMLout($xml_in, OutputFile => 'somepath/test_out.xml', KeepRoot => 1);
        open F, ">>", $txt;
        print F $link_local . "\n\n";
        close F;
    }

    And here is a snippet of the data I need to edit. If the IMDB and DVD Empire links are found, do not touch them; if local links are found, replace them, otherwise insert them. I'm willing to complete the code myself but need some direction on how to proceed. Thanks.

    <title>$title</title>
    ...
    <links>
      <link>
        <description>IMDB</description>
        <url>http://www.imdb.com/title/VARIABLE</url>
        <urltype>URL</urltype>
      </link>
      <link>
        <description>DVD Empire</description>
        <url>http://www.dvdempire.com/VARIABLE</url>
        <urltype>URL</urltype>
      </link>
      <link>
        <description>Folder</description>
        <url>OLD_FOLDERPATH</url>
        <urltype>Movie</urltype>
      </link>
      <link>
        <description>OLD_FILENAME</description>
        <url>OLD_FILENAMEPATH</url>
        <urltype>Movie</urltype>
      </link>
    </links>
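    One way to frame the two stuck points: build a dict from the directory walk keyed by the cleaned-up title, then walk the XML once, look each <title> up in that dict, and replace or insert only the two "Movie" links. The sketch below uses Python's ElementTree for brevity (the same shape works in Perl with XML::LibXML); the element names follow the snippet above, but the per-movie wrapper element name is an assumption:

    import xml.etree.ElementTree as ET

    new_paths = {}   # title -> (folder_path, file_path), filled by the directory walk

    def update_links(movie):                  # element holding <title> and <links>
        title = movie.findtext("title")
        if title not in new_paths:
            return
        folder, filepath = new_paths[title]
        links = movie.find("links")
        # drop stale local links; IMDB / DVD Empire entries are left untouched
        for link in list(links):
            if link.findtext("urltype") == "Movie":
                links.remove(link)
        filename = filepath.rsplit("\\", 1)[-1]
        for desc, url in (("Folder", folder), (filename, filepath)):
            link = ET.SubElement(links, "link")
            ET.SubElement(link, "description").text = desc
            ET.SubElement(link, "url").text = url
            ET.SubElement(link, "urltype").text = "Movie"

    tree = ET.parse("test.xml")
    for movie in tree.iter("movie"):          # "movie" is an assumed element name
        update_links(movie)
    tree.write("test_out.xml")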

    Read the article

  • Keyboard navigation for jQuery Tabs

    - by Binyamin
    How do I add a keyboard navigation (left/up/right/down, like in a photo gallery) feature to jQuery Tabs with History? A demo without the keyboard feature is at http://dl.dropbox.com/u/6594481/tabs/index.html. Needed functions:

    1. on keyboard up/down, select and show the active nested ajax tab, from the first level to the last
    2. on keyboard left/right, change back/forward the content of the active nested ajax tab
    3. as an extra option, make a nested ajax tab active on "cursor-on" at a concrete nested tab level

    Read the more detailed question with example pictures at http://stackoverflow.com/questions/2975003/jquery-tools-to-make-keyboard-and-cookies-feature-for-ajaxed-tabs-with-history

    /**
     * @license
     * jQuery Tools @VERSION Tabs - The basics of UI design.
     *
     * NO COPYRIGHTS OR LICENSES. DO WHAT YOU LIKE.
     *
     * http://flowplayer.org/tools/tabs/
     *
     * Since: November 2008
     * Date: @DATE
     */
    (function($) {

      // static constructs
      $.tools = $.tools || {version: '@VERSION'};

      $.tools.tabs = {
        conf: {
          tabs: 'a',
          current: 'current',
          onBeforeClick: null,
          onClick: null,
          effect: 'default',
          initialIndex: 0,
          event: 'click',
          rotate: false,
          // 1.2
          history: false
        },
        addEffect: function(name, fn) {
          effects[name] = fn;
        }
      };

      var effects = {

        // simple "toggle" effect
        'default': function(i, done) {
          this.getPanes().hide().eq(i).show();
          done.call();
        },

        /*
          configuration:
            - fadeOutSpeed (positive value does "crossfading")
            - fadeInSpeed
        */
        fade: function(i, done) {
          var conf = this.getConf(),
              speed = conf.fadeOutSpeed,
              panes = this.getPanes();

          if (speed) {
            panes.fadeOut(speed);
          } else {
            panes.hide();
          }

          panes.eq(i).fadeIn(conf.fadeInSpeed, done);
        },

        // for basic accordions
        slide: function(i, done) {
          this.getPanes().slideUp(200);
          this.getPanes().eq(i).slideDown(400, done);
        },

        /**
         * AJAX effect
         */
        ajax: function(i, done) {
          this.getPanes().eq(0).load(this.getTabs().eq(i).attr("href"), done);
        }
      };

      var w;

      /**
       * Horizontal accordion
       *
       * @deprecated will be replaced with a more robust implementation
       */
      $.tools.tabs.addEffect("horizontal", function(i, done) {

        // store original width of a pane into memory
        if (!w) { w = this.getPanes().eq(0).width(); }

        // set current pane's width to zero
        this.getCurrentPane().animate({width: 0}, function() { $(this).hide(); });

        // grow opened pane to its original width
        this.getPanes().eq(i).animate({width: w}, function() {
          $(this).show();
          done.call();
        });
      });

      function Tabs(root, paneSelector, conf) {

        var self = this,
            trigger = root.add(this),
            tabs = root.find(conf.tabs),
            panes = paneSelector.jquery ? paneSelector : root.children(paneSelector),
            current;

        // make sure tabs and panes are found
        if (!tabs.length)  { tabs = root.children(); }
        if (!panes.length) { panes = root.parent().find(paneSelector); }
        if (!panes.length) { panes = $(paneSelector); }

        // public methods
        $.extend(this, {
          click: function(i, e) {

            var tab = tabs.eq(i);

            if (typeof i == 'string' && i.replace("#", "")) {
              tab = tabs.filter("[href*=" + i.replace("#", "") + "]");
              i = Math.max(tabs.index(tab), 0);
            }

            if (conf.rotate) {
              var last = tabs.length - 1;
              if (i < 0)    { return self.click(last, e); }
              if (i > last) { return self.click(0, e); }
            }

            if (!tab.length) {
              if (current >= 0) { return self; }
              i = conf.initialIndex;
              tab = tabs.eq(i);
            }

            // current tab is being clicked
            if (i === current) { return self; }

            // possibility to cancel click action
            e = e || $.Event();
            e.type = "onBeforeClick";
            trigger.trigger(e, [i]);
            if (e.isDefaultPrevented()) { return; }

            // call the effect
            effects[conf.effect].call(self, i, function() {
              // onClick callback
              e.type = "onClick";
              trigger.trigger(e, [i]);
            });

            // default behaviour
            current = i;
            tabs.removeClass(conf.current);
            tab.addClass(conf.current);

            return self;
          },

          getConf: function() { return conf; },
          getTabs: function() { return tabs; },
          getPanes: function() { return panes; },
          getCurrentPane: function() { return panes.eq(current); },
          getCurrentTab: function() { return tabs.eq(current); },
          getIndex: function() { return current; },
          next: function() { return self.click(current + 1); },
          prev: function() { return self.click(current - 1); }
        });

        // callbacks
        $.each("onBeforeClick,onClick".split(","), function(i, name) {

          // configuration
          if ($.isFunction(conf[name])) {
            $(self).bind(name, conf[name]);
          }

          // API
          self[name] = function(fn) {
            $(self).bind(name, fn);
            return self;
          };
        });

        if (conf.history && $.fn.history) {
          $.tools.history.init(tabs);
          conf.event = 'history';
        }

        // setup click actions for each tab
        tabs.each(function(i) {
          $(this).bind(conf.event, function(e) {
            self.click(i, e);
            return e.preventDefault();
          });
        });

        // cross tab anchor link
        panes.find("a[href^=#]").click(function(e) {
          self.click($(this).attr("href"), e);
        });

        // open initial tab
        if (location.hash) {
          self.click(location.hash);
        } else {
          if (conf.initialIndex === 0 || conf.initialIndex > 0) {
            self.click(conf.initialIndex);
          }
        }
      }

      // jQuery plugin implementation
      $.fn.tabs = function(paneSelector, conf) {

        // return existing instance
        var el = this.data("tabs");
        if (el) { return el; }

        if ($.isFunction(conf)) {
          conf = {onBeforeClick: conf};
        }

        // setup conf
        conf = $.extend({}, $.tools.tabs.conf, conf);

        this.each(function() {
          el = new Tabs($(this), paneSelector, conf);
          $(this).data("tabs", el);
        });

        return conf.api ? el : this;
      };

    })(jQuery);

    /**
     * @license
     * jQuery Tools @VERSION History "Back button for AJAX apps"
     *
     * NO COPYRIGHTS OR LICENSES. DO WHAT YOU LIKE.
     *
     * http://flowplayer.org/tools/toolbox/history.html
     *
     * Since: Mar 2010
     * Date: @DATE
     */
    (function($) {

      var hash, iframe, links, inited;

      $.tools = $.tools || {version: '@VERSION'};

      $.tools.history = {

        init: function(els) {

          if (inited) { return; }

          // IE
          if ($.browser.msie && $.browser.version < '8') {

            // create iframe that is constantly checked for hash changes
            if (!iframe) {
              iframe = $("<iframe/>").attr("src", "javascript:false;").hide().get(0);
              $("body").append(iframe);

              setInterval(function() {
                var idoc = iframe.contentWindow.document,
                    h = idoc.location.hash;

                if (hash !== h) {
                  $.event.trigger("hash", h);
                }
              }, 100);

              setIframeLocation(location.hash || '#');
            }

          // other browsers scan for location.hash changes directly without the iframe hack
          } else {
            setInterval(function() {
              var h = location.hash;
              if (h !== hash) {
                $.event.trigger("hash", h);
              }
            }, 100);
          }

          links = !links ? els : links.add(els);

          els.click(function(e) {
            var href = $(this).attr("href");
            if (iframe) { setIframeLocation(href); }

            // handle non-anchor links
            if (href.slice(0, 1) != "#") {
              location.href = "#" + href;
              return e.preventDefault();
            }
          });

          inited = true;
        }
      };

      function setIframeLocation(h) {
        if (h) {
          var doc = iframe.contentWindow.document;
          doc.open().close();
          doc.location.hash = h;
        }
      }

      // global history change listener
      $(window).bind("hash", function(e, h) {

        if (h) {
          links.filter(function() {
            var href = $(this).attr("href");
            return href == h || href == h.replace("#", "");
          }).trigger("history", [h]);
        } else {
          links.eq(0).trigger("history", [h]);
        }

        hash = h;
        window.location.hash = hash;
      });

      // jQuery plugin implementation
      $.fn.history = function(fn) {
        $.tools.history.init(this);

        // return jQuery
        return this.bind("history", fn);
      };

    })(jQuery);

    $(function() {
      $("#list").tabs("#content > div", {effect: 'ajax', history: true});
    });

    Read the article

  • CentOS yum problems

    - by Malachi Soord
    I am really new to using Linux and have just formatted my CentOS 5.2 VPS, and I am trying to install links using the command yum install links. But the following error gets displayed:

    [root@inverses ~]# yum install links
    Loading "fastestmirror" plugin
    Loading mirror speeds from cached hostfile
     * lxlabsupdate: download.lxlabs.com
     * lxlabslxupdate: download.lxlabs.com
     * base: ftp.nluug.nl
     * updates: distrib-coffee.ipsl.jussieu.fr
     * addons: mirror.answerstolove.com
     * extras: distrib-coffee.ipsl.jussieu.fr
    http://ftp.nluug.nl/ftp/pub/os/Linux/distr/CentOS/5.2/os/i386/repodata/repomd.xml: [Errno 14] HTTP Error 404: Not Found
    Trying other mirror.
    http://distrib-coffee.ipsl.jussieu.fr/pub/linux/centos/5.2/os/i386/repodata/repomd.xml: [Errno 14] HTTP Error 404: Not Found
    Trying other mirror.
    http://mirror.ukhost4u.com/centos/5.2/os/i386/repodata/repomd.xml: [Errno 14] HTTP Error 404: Not Found
    Trying other mirror.
    http://centosh2.centos.org/centos/5.2/os/i386/repodata/repomd.xml: [Errno 14] HTTP Error 404: Not Found
    Trying other mirror.
    http://mirror.atrpms.net/centos/5.2/os/i386/repodata/repomd.xml: [Errno 14] HTTP Error 404: Not Found
    Trying other mirror.
    http://centosf.centos.org/centos/5.2/os/i386/repodata/repomd.xml: [Errno 14] HTTP Error 404: Not Found
    Trying other mirror.
    http://centoso3.centos.org/centos/5.2/os/i386/repodata/repomd.xml: [Errno 14] HTTP Error 404: Not Found
    Trying other mirror.
    http://centosk.centos.org/centos/5.2/os/i386/repodata/repomd.xml: [Errno 14] HTTP Error 404: Not Found
    Trying other mirror.
    http://centosv.centos.org/centos/5.2/os/i386/repodata/repomd.xml: [Errno 14] HTTP Error 404: Not Found
    Trying other mirror.
    http://centosk3.centos.org/centos/5.2/os/i386/repodata/repomd.xml: [Errno 14] HTTP Error 404: Not Found
    Trying other mirror.
    Error: Cannot retrieve repository metadata (repomd.xml) for repository: base. Please verify its path and try again

    From checking some of these URLs manually, it looks like they need to point at .../5/... instead of .../5.2/... Is this a common thing to have to change, and how could I change it? Here is my CentOS-Base.repo: http://pastebin.com/m67c1a022
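    For context: once a point release is superseded, its tree is removed from the live mirrors (only the current 5.x tree stays, with archives moving to vault.centos.org), which is why hardcoded 5.2 URLs now return 404. Below is a cautious sketch of the edit the asker describes, rewriting the repo file's 5.2 paths to 5 after making a backup; the file path is the standard one, but review the result before running yum again:

    import re, shutil

    REPO = "/etc/yum.repos.d/CentOS-Base.repo"
    shutil.copy(REPO, REPO + ".bak")        # keep the original around

    with open(REPO) as fh:
        text = fh.read()

    # point hardcoded 5.2 baseurls at the current 5.x tree instead
    with open(REPO, "w") as fh:
        fh.write(re.sub(r"/5\.2/", "/5/", text))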

    Read the article

  • Why doesn't my symbolic link work?

    - by orokusaki
    I'm trying to better understand symbolic links... and not having very much luck. This is my actual shell output, with username/host changed:

    username@host:~$ mkdir actual
    username@host:~$ mkdir proper
    username@host:~$ touch actual/file-1.txt
    username@host:~$ echo "file 1" > actual/file-1.txt
    username@host:~$ touch actual/file-2.txt
    username@host:~$ echo "file 2" > actual/file-2.txt
    username@host:~$ ln -s actual/file-1.txt actual/file-2.txt proper
    username@host:~$ # Now, try to use the files through their links
    username@host:~$ cat proper/file-1.txt
    cat: proper/file-1.txt: No such file or directory
    username@host:~$ cat proper/file-2.txt
    cat: proper/file-2.txt: No such file or directory
    username@host:~$ # Check that the actual files do in fact exist
    username@host:~$ cat actual/file-1.txt
    file 1
    username@host:~$ cat actual/file-2.txt
    file 2
    username@host:~$ # Remove the links and go home :(
    username@host:~$ rm proper/file-1.txt
    username@host:~$ rm proper/file-2.txt

    I thought that a symbolic link was supposed to operate transparently, in the sense that you could operate on the file it points to as if you were accessing the file directly (except of course in the case of rm, where the link itself is simply removed).
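    The trap in the session above: a relative symlink target is resolved relative to the directory containing the link, not the directory where ln was run, so proper/file-1.txt pointing at "actual/file-1.txt" actually looks for proper/actual/file-1.txt. A small stdlib demonstration of the same behaviour, as a sketch:

    import os

    os.makedirs("actual", exist_ok=True)
    os.makedirs("proper", exist_ok=True)
    with open("actual/file-1.txt", "w") as fh:
        fh.write("file 1\n")

    os.symlink("actual/file-1.txt", "proper/broken.txt")      # dangles: resolves under proper/
    os.symlink("../actual/file-1.txt", "proper/works.txt")    # resolves: one level up, then actual/

    print(os.path.exists("proper/broken.txt"))   # False - target missing
    print(open("proper/works.txt").read())       # file 1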

    Read the article

  • File permissions question

    - by Matthew Robert Keable
    I just switched my site's server from Windows to Linux, and am finally able to control file permissions from my FTP client. Seeing that all permissions were 705 by default (and not wanting just anyone to have permission to execute), I went and changed everything to 744. Now GIF and JPG links don't work, PDF download links don't work, PHP pages don't load, and MOV files don't play. Conversely, all HTML files work perfectly. Setting things back doesn't seem to help; even setting everything to 777 gets me nowhere. Any ideas on what might be going wrong? I've been googling file permissions all day (I solved the problem I expected from the Windows-Linux switch, which has bred a new one), and I don't think anything I can find has escaped my attention. The site: absis-minas.com. Go easy on a n00b. I took up learning PHP out of interest, and wound up delving into server management issues because a very simple line of code wasn't working the way it was supposed to. Thanks!
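    One classic gotcha here (an assumption, not verifiable from the question alone): on a directory, the execute bit means "may traverse", so flipping directories from 705 (rwx---r-x) to 744 (rwxr--r--) strips the traverse bit the web server relies on to reach files inside. A short sketch decoding the octal digits into rwx bits for owner/group/other:

    import stat

    FLAGS = (stat.S_IRUSR, stat.S_IWUSR, stat.S_IXUSR,
             stat.S_IRGRP, stat.S_IWGRP, stat.S_IXGRP,
             stat.S_IROTH, stat.S_IWOTH, stat.S_IXOTH)

    def describe(octal):
        mode = int(octal, 8)
        bits = "".join(ch if mode & flag else "-"
                       for ch, flag in zip("rwxrwxrwx", FLAGS))
        return octal + " = " + bits

    for octal in ("705", "744", "755"):
        print(describe(octal))   # 705 = rwx---r-x, 744 = rwxr--r--, 755 = rwxr-xr-x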

    Read the article

  • DAS vs SAN storage for serving 2 to 4 nodes

    - by Luke404
    We currently have 4 Linux nodes with local storage, arranged in two active/passive pairs with storage mirrored using DRBD, running virtual machines (on the Xen hypervisor) for typical hosting workloads (mail, web, a couple of VPS, etc.). We're approaching the (presumed) maximum IOPS of those servers, and we're planning to migrate to an external storage solution with two active nodes, with capacity for up to four active nodes.

    Since we're an all-Dell shop, I've done some research and found that the MD3200 / MD3200i products should be the ones we're looking for. We are pretty sure we won't be attaching more than 4 hosts to a single array, and I'm wondering if there is any clear advantage of one over the other. In theory I should be able to attach 4 SAS hosts to a single MD3200 (single links to a single-controller MD3200, or dual redundant SAS links from each host to a dual-controller MD3200), or 4 iSCSI hosts to a single MD3200i (directly on its 4 GigE ports without any switch, again with dual links for the dual-controller option). Both setups should let us implement live VM migration, since all hosts can access all the LUNs at the same time, together with a shared filesystem like GFS2 or OCFS2. Both setups should also allow full redundancy of the whole system (assuming dual controllers in the storage).

    One difference I can see is that the DAS solution is limited to 4 hosts, while the iSCSI one should be able to grow to more hosts (by adding two GigE switches to the mix). Another point for the iSCSI solution is that it would let us start out with our current nodes and upgrade them later (we can't add more SAS controllers, but the nodes already have 4 GigE ports each). With the right (iSCSI|SAS) controllers I should be able to connect diskless nodes and boot them off the external storage, which I think is a good thing (getting rid of any local storage). On the other hand, I would have thought the SAS option would be cheaper, but it seems an MD3200 actually costs a little less than an MD3200i (?).

    (Please note: I've used Dell gear in my examples since that's what we're looking for, but I assume the same goes for other vendors.) I would like to know if my assumptions above are correct, and if I'm missing any important difference between the two setups.

    Read the article

  • wget crawling search results of news website

    - by kiltek
    I am trying to crawl the search results of a news website using wget. The website is www.voanews.com. After typing in my search keyword and clicking search, it proceeds to the results. Then I can specify a "to" and a "from" date and hit search again. After this the URL becomes:

    http://www.voanews.com/search/?st=article&k=mykeyword&df=10%2F01%2F2013&dt=09%2F20%2F2013&ob=dt#article

    and the actual content of the results is what I want to download. To achieve this I created the following wget command:

    wget --reject=js,txt,gif,jpeg,jpg \
         --accept=html \
         --user-agent=My-Browser \
         --recursive --level=2 \
         www.voanews.com/search/?st=article&k=germany&df=08%2F21%2F2013&dt=09%2F20%2F2013&ob=dt#article

    Unfortunately, the crawler doesn't download the search results. It only gets into the upper link bar, which contains the "Home, USA, Africa, Asia, ..." links, and saves the articles they link to. It seems like the crawler doesn't check the search result links at all. What am I doing wrong, and how can I modify the wget command to download only the result list links (and of course the pages they link to)?
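    One likely culprit, inferred from the command as pasted rather than tested: the URL is unquoted, so the shell treats each & as a command terminator and wget never receives the k/df/dt parameters, while the #article fragment is client-side only and never reaches the server at all. Quoting the URL in the shell fixes this; the sketch below makes the same point from Python, where passing the URL as a single argv element makes the argument boundary explicit:

    import subprocess

    url = ("http://www.voanews.com/search/?st=article&k=germany"
           "&df=08%2F21%2F2013&dt=09%2F20%2F2013&ob=dt")   # fragment omitted: never sent to the server

    subprocess.run([
        "wget",
        "--reject=js,txt,gif,jpeg,jpg",
        "--accept=html",
        "--user-agent=My-Browser",
        "--recursive", "--level=2",
        url,                      # one argument: '&' no longer splits the command
    ], check=True)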

    Read the article
