Search Results

Search found 8483 results on 340 pages for 'magnet links'.


  • customizing google custom search

    - by user19866
    I am using Google Custom Search for my site, and I want Google to return at most 2 results from any one site. I am not sure whether this is possible; there seems to be no such option in Google CSE. For example, my CSE has 10 listed sites, and whenever a user makes a search I want the results to show at most two links per site. At the moment CSE tends to return multiple results from a single site, and those fill page 1; if it were capped at two per site, the user would have a better chance of seeing links from the other sites too.
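
    A minimal sketch of one workaround, assuming the results are fetched through the Custom Search JSON API rather than the hosted search box (the API key and engine ID below are placeholders): post-filter the returned items and keep at most two per displayLink.

        import json
        import urllib.parse
        import urllib.request

        API_KEY = "YOUR_API_KEY"      # placeholder
        ENGINE_ID = "YOUR_ENGINE_ID"  # placeholder (the CSE "cx" value)

        def search_capped(query, per_site_limit=2, pages=3):
            """Query the Custom Search JSON API and keep at most
            per_site_limit results per site (grouped by displayLink)."""
            kept, per_site = [], {}
            for page in range(pages):
                params = urllib.parse.urlencode({
                    "key": API_KEY, "cx": ENGINE_ID, "q": query,
                    "start": 1 + page * 10,
                })
                url = "https://www.googleapis.com/customsearch/v1?" + params
                with urllib.request.urlopen(url) as resp:
                    data = json.loads(resp.read().decode("utf-8"))
                for item in data.get("items", []):
                    site = item.get("displayLink", "")
                    if per_site.get(site, 0) < per_site_limit:
                        per_site[site] = per_site.get(site, 0) + 1
                        kept.append((item["title"], item["link"]))
            return kept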

    Read the article

  • Smart auto detect and replace URLs with anchor tags

    - by Robert Koritnik
    I've written a regular expression that automatically detects URLs in free text that users enter. This is not such a simple task as it may seem at first. Jeff Atwood writes about it in his post. His regular expression works, but needs extra code after detection is done. I've managed to write a regular expression that does everything in a single go. This is what it looks like (I've broken it down into separate lines to make it easier to understand what it does):

         1  (?<outer>\()?
         2  (?<scheme>http(?<secure>s)?://)?
         3  (?<url>
         4  (?(scheme)
         5  (?:www\.)?
         6  |
         7  www\.
         8  )
         9  [a-z0-9]
        10  (?(outer)
        11  [-a-z0-9/+&@#/%?=~_()|!:,.;čšžćđ]+(?=\))
        12  |
        13  [-a-z0-9/+&@#/%?=~_()|!:,.;čšžćđ]+
        14  )
        15  )
        16  (?<ending>(?(outer)\)))

    As you may see, I'm using named capture groups (used later in Regex.Replace()) and I've also included some local characters (čšžćđ) that allow our localised URLs to be parsed as well. You can easily omit them if you'd like. Anyway, here's what it does (referring to the line numbers):

    1 - detects whether the URL starts with an opening brace (is contained inside braces) and stores it in the "outer" named capture group
    2 - checks if it starts with a URL scheme, also detecting whether the scheme is SSL or not
    3 - starts parsing the URL itself (it will be stored in the "url" named capture group)
    4-8 - an if statement that says: if "scheme" was present then the www. part is optional, otherwise it is mandatory for a string to be a link (so this regular expression detects all strings that start with either http or www)
    9 - the first character after http:// or www. should be either a letter or a number (this can be extended if you'd like to cover even more links, but I decided to omit other characters because I can't remember a link that starts with anything else)
    10-14 - an if statement that says: if "outer" (braces) was present, capture everything up to the last closing brace, otherwise capture everything
    15 - closes the named capture group for the URL
    16 - if an opening brace was present, capture the closing brace as well and store it in the "ending" named capture group

    The first and last lines used to have \s* in them as well, so the user could also type an opening brace and put a space inside before pasting the link. Anyway, my code that replaces links with actual anchor HTML elements looks exactly like this:

        value = Regex.Replace(
            value,
            @"(?<outer>\()?(?<scheme>http(?<secure>s)?://)?(?<url>(?(scheme)(?:www\.)?|www\.)[a-z0-9](?(outer)[-a-z0-9/+&@#/%?=~_()|!:,.;čšžćđ]+(?=\))|[-a-z0-9/+&@#/%?=~_()|!:,.;čšžćđ]+))(?<ending>(?(outer)\)))",
            "${outer}<a href=\"http${secure}://${url}\">http${secure}://${url}</a>${ending}",
            RegexOptions.Compiled | RegexOptions.CultureInvariant | RegexOptions.IgnoreCase);

    As you can see, I'm using named capture groups to replace the link with an anchor tag:

        ${outer}<a href=\"http${secure}://${url}\">http${secure}://${url}</a>${ending}

    I could as well omit the http(s) part in the anchor display to make links look friendlier, but for now I've decided not to.

    Question: I would like my links to be shortened as well. When a user pastes a very long link (for instance a link from Google Maps, which usually generates long URLs), I would like to shorten the visible part of the anchor tag. The link would still work, but the visible text of the anchor would be truncated to some number of characters. Does the replacement string support a notation like that, so I can still use a single Regex.Replace() call?
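
    A static replacement string cannot truncate the visible text conditionally, but a replacement callback can; in .NET that is the Regex.Replace overload that takes a MatchEvaluator delegate. A minimal sketch of the idea in Python, using a deliberately simplified URL pattern rather than the full expression above:

        import re

        # Deliberately simplified URL pattern, for illustration only.
        URL_RE = re.compile(r'\bhttps?://[^\s<>"]+', re.IGNORECASE)

        def linkify(text, max_visible=40):
            """Replace bare URLs with anchor tags, truncating the visible text."""
            def repl(match):
                url = match.group(0)
                visible = url if len(url) <= max_visible else url[:max_visible] + "..."
                return '<a href="{0}">{1}</a>'.format(url, visible)
            return URL_RE.sub(repl, text)

        print(linkify("see http://maps.example.com/a/very/long/generated/link?x=1&y=2"))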

    Read the article

  • How To Edit XML File

    - by thebourneid
    I have a movie collection catalogue with local links to folders and files for easy access. Recently I reorganized my entire hard disk and I need to update the links, and I'm trying to do that automatically with Perl. I can export the data to an XML file and import it again, and I can extract the new file paths with File::Find, but I'm stuck with two problems. I have no idea how to connect the $title from the new file path with the corresponding $title in the XML file, and I'm dealing with such files for the first time, so I don't know how to proceed with the replacement process. Here is what I've done so far:

        use strict;
        use warnings;
        use File::Basename;
        use File::Find;
        use File::Spec;
        use XML::Simple;
        use Data::Dumper;

        my $dir_target = 'somepath';
        find(\&a, $dir_target);

        sub a {
            /\.iso$/ or return;
            my $fn = $File::Find::name;
            $fn =~ s/\//\\/g;
            $fn =~ /(.*\\)(.*)/;
            my $path     = $1;
            my $filename = $2;
            my $title = (File::Spec->splitdir($fn))[2];
            $title =~ s/(.*?)\s\(\d+\)$/$1/;
            $title =~ s/~/:/;
            $title =~ s/`/?/;
            my $link_local = '<link><description>Folder</description><url>'.$path.'</url><urltype>Movie</urltype></link><link><description>'.$filename.'</description><url>'.$fn.'</url><urltype>Movie</urltype></link>' unless $title eq '';
            my $txt = 'somepath/log.txt';
            my $xml_in  = XMLin('somepath/test.xml', ForceArray => 1, KeepRoot => 1);
            my $xml_out = XMLout($xml_in, OutputFile => 'somepath/test_out.xml', KeepRoot => 1);
            open F, ">>", $txt;
            print F $link_local."\n\n";
            close F;
        }

    And here is a snippet of the data I need to edit. If the IMDB and DVD Empire links are found, do not touch them; if local links are found, replace them, otherwise insert them. I'm willing to complete the code myself but need some direction on how to proceed further. Thanks.

        <title>$title</title>
        .......
        <links>
          <link>
            <description>IMDB</description>
            <url>http://www.imdb.com/title/VARIABLE</url>
            <urltype>URL</urltype>
          </link>
          <link>
            <description>DVD Empire</description>
            <url>http://www.dvdempire.com/VARIABLE</url>
            <urltype>URL</urltype>
          </link>
          <link>
            <description>Folder</description>
            <url>OLD_FOLDERPATH</url>
            <urltype>Movie</urltype>
          </link>
          <link>
            <description>OLD_FILENAME</description>
            <url>OLD_FILENAMEPATH</url>
            <urltype>Movie</urltype>
          </link>
        </links>
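
    For the matching problem, one language-agnostic approach is to build a map from cleaned-up title to the new paths during the directory walk, then update each entry's <links> in a single pass over the XML. A hedged sketch (Python here only for brevity; it assumes each movie entry has <title> and <links> children as in the snippet above):

        import xml.etree.ElementTree as ET

        # new_paths maps a cleaned-up title to its new (folder, iso_file) pair,
        # i.e. the data the File::Find pass above collects.
        def update_links(xml_file, new_paths, out_file):
            tree = ET.parse(xml_file)
            for entry in tree.getroot().iter():
                title_el = entry.find("title")
                links_el = entry.find("links")
                if title_el is None or links_el is None:
                    continue
                paths = new_paths.get(title_el.text)
                if paths is None:
                    continue
                folder, iso = paths
                # Drop the old local (Movie) links, keep IMDB/DVD Empire (URL) links.
                for link in list(links_el):
                    if link.findtext("urltype") == "Movie":
                        links_el.remove(link)
                # Insert fresh local links pointing at the new location.
                for desc, url in (("Folder", folder), (iso.rsplit("\\", 1)[-1], iso)):
                    link = ET.SubElement(links_el, "link")
                    ET.SubElement(link, "description").text = desc
                    ET.SubElement(link, "url").text = url
                    ET.SubElement(link, "urltype").text = "Movie"
            tree.write(out_file, encoding="utf-8")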

    Read the article

  • Keyboard navigation for jQuery Tabs

    - by Binyamin
    How to make Keyboard navigation left/up/right/down (like for photo gallery) feature for jQury Tabs with History? Demo without Keyboard feature in http://dl.dropbox.com/u/6594481/tabs/index.html Needed functions: 1. on keyboardtop/down make select and CSS showactivenested ajax tabs from 1-st to last level 2. on keyboardleft/right changeback/forwardcontent ofactivenested ajax tabs tab 3. an extra option, makeactivenested ajax tab on 'cursor-on' on concrete nested ajax tabs level Read more detailed question with example pictures in http://stackoverflow.com/questions/2975003/jquery-tools-to-make-keyboard-and-cookies-feature-for-ajaxed-tabs-with-history /** * @license * jQuery Tools @VERSION Tabs- The basics of UI design. * * NO COPYRIGHTS OR LICENSES. DO WHAT YOU LIKE. * * http://flowplayer.org/tools/tabs/ * * Since: November 2008 * Date: @DATE */ (function($) { // static constructs $.tools = $.tools || {version: '@VERSION'}; $.tools.tabs = { conf: { tabs: 'a', current: 'current', onBeforeClick: null, onClick: null, effect: 'default', initialIndex: 0, event: 'click', rotate: false, // 1.2 history: false }, addEffect: function(name, fn) { effects[name] = fn; } }; var effects = { // simple "toggle" effect 'default': function(i, done) { this.getPanes().hide().eq(i).show(); done.call(); }, /* configuration: - fadeOutSpeed (positive value does "crossfading") - fadeInSpeed */ fade: function(i, done) { var conf = this.getConf(), speed = conf.fadeOutSpeed, panes = this.getPanes(); if (speed) { panes.fadeOut(speed); } else { panes.hide(); } panes.eq(i).fadeIn(conf.fadeInSpeed, done); }, // for basic accordions slide: function(i, done) { this.getPanes().slideUp(200); this.getPanes().eq(i).slideDown(400, done); }, /** * AJAX effect */ ajax: function(i, done) { this.getPanes().eq(0).load(this.getTabs().eq(i).attr("href"), done); } }; var w; /** * Horizontal accordion * * @deprecated will be replaced with a more robust implementation */ $.tools.tabs.addEffect("horizontal", function(i, done) { // store original width of a pane into memory if (!w) { w = this.getPanes().eq(0).width(); } // set current pane's width to zero this.getCurrentPane().animate({width: 0}, function() { $(this).hide(); }); // grow opened pane to it's original width this.getPanes().eq(i).animate({width: w}, function() { $(this).show(); done.call(); }); }); function Tabs(root, paneSelector, conf) { var self = this, trigger = root.add(this), tabs = root.find(conf.tabs), panes = paneSelector.jquery ? 
paneSelector : root.children(paneSelector), current; // make sure tabs and panes are found if (!tabs.length) { tabs = root.children(); } if (!panes.length) { panes = root.parent().find(paneSelector); } if (!panes.length) { panes = $(paneSelector); } // public methods $.extend(this, { click: function(i, e) { var tab = tabs.eq(i); if (typeof i == 'string' && i.replace("#", "")) { tab = tabs.filter("[href*=" + i.replace("#", "") + "]"); i = Math.max(tabs.index(tab), 0); } if (conf.rotate) { var last = tabs.length -1; if (i < 0) { return self.click(last, e); } if (i > last) { return self.click(0, e); } } if (!tab.length) { if (current >= 0) { return self; } i = conf.initialIndex; tab = tabs.eq(i); } // current tab is being clicked if (i === current) { return self; } // possibility to cancel click action e = e || $.Event(); e.type = "onBeforeClick"; trigger.trigger(e, [i]); if (e.isDefaultPrevented()) { return; } // call the effect effects[conf.effect].call(self, i, function() { // onClick callback e.type = "onClick"; trigger.trigger(e, [i]); }); // default behaviour current = i; tabs.removeClass(conf.current); tab.addClass(conf.current); return self; }, getConf: function() { return conf; }, getTabs: function() { return tabs; }, getPanes: function() { return panes; }, getCurrentPane: function() { return panes.eq(current); }, getCurrentTab: function() { return tabs.eq(current); }, getIndex: function() { return current; }, next: function() { return self.click(current + 1); }, prev: function() { return self.click(current - 1); } }); // callbacks $.each("onBeforeClick,onClick".split(","), function(i, name) { // configuration if ($.isFunction(conf[name])) { $(self).bind(name, conf[name]); } // API self[name] = function(fn) { $(self).bind(name, fn); return self; }; }); if (conf.history && $.fn.history) { $.tools.history.init(tabs); conf.event = 'history'; } // setup click actions for each tab tabs.each(function(i) { $(this).bind(conf.event, function(e) { self.click(i, e); return e.preventDefault(); }); }); // cross tab anchor link panes.find("a[href^=#]").click(function(e) { self.click($(this).attr("href"), e); }); // open initial tab if (location.hash) { self.click(location.hash); } else { if (conf.initialIndex === 0 || conf.initialIndex > 0) { self.click(conf.initialIndex); } } } // jQuery plugin implementation $.fn.tabs = function(paneSelector, conf) { // return existing instance var el = this.data("tabs"); if (el) { return el; } if ($.isFunction(conf)) { conf = {onBeforeClick: conf}; } // setup conf conf = $.extend({}, $.tools.tabs.conf, conf); this.each(function() { el = new Tabs($(this), paneSelector, conf); $(this).data("tabs", el); }); return conf.api ? el: this; }; }) (jQuery); /** * @license * jQuery Tools @VERSION History "Back button for AJAX apps" * * NO COPYRIGHTS OR LICENSES. DO WHAT YOU LIKE. 
* * http://flowplayer.org/tools/toolbox/history.html * * Since: Mar 2010 * Date: @DATE */ (function($) { var hash, iframe, links, inited; $.tools = $.tools || {version: '@VERSION'}; $.tools.history = { init: function(els) { if (inited) { return; } // IE if ($.browser.msie && $.browser.version < '8') { // create iframe that is constantly checked for hash changes if (!iframe) { iframe = $("<iframe/>").attr("src", "javascript:false;").hide().get(0); $("body").append(iframe); setInterval(function() { var idoc = iframe.contentWindow.document, h = idoc.location.hash; if (hash !== h) { $.event.trigger("hash", h); } }, 100); setIframeLocation(location.hash || '#'); } // other browsers scans for location.hash changes directly without iframe hack } else { setInterval(function() { var h = location.hash; if (h !== hash) { $.event.trigger("hash", h); } }, 100); } links = !links ? els : links.add(els); els.click(function(e) { var href = $(this).attr("href"); if (iframe) { setIframeLocation(href); } // handle non-anchor links if (href.slice(0, 1) != "#") { location.href = "#" + href; return e.preventDefault(); } }); inited = true; } }; function setIframeLocation(h) { if (h) { var doc = iframe.contentWindow.document; doc.open().close(); doc.location.hash = h; } } // global histroy change listener $(window).bind("hash", function(e, h) { if (h) { links.filter(function() { var href = $(this).attr("href"); return href == h || href == h.replace("#", ""); }).trigger("history", [h]); } else { links.eq(0).trigger("history", [h]); } hash = h; window.location.hash = hash; }); // jQuery plugin implementation $.fn.history = function(fn) { $.tools.history.init(this); // return jQuery return this.bind("history", fn); }; })(jQuery); $(function() { $("#list").tabs("#content > div", {effect: 'ajax', history: true}); });

    Read the article

  • centos yum problems

    - by Malachi Soord
    I am really new to using Linux and have just formatted my CentOS 5.2 VPS, and I am trying to install links by using the command yum install links. But the following error gets displayed:

        [root@inverses ~]# yum install links
        Loading "fastestmirror" plugin
        Loading mirror speeds from cached hostfile
         * lxlabsupdate: download.lxlabs.com
         * lxlabslxupdate: download.lxlabs.com
         * base: ftp.nluug.nl
         * updates: distrib-coffee.ipsl.jussieu.fr
         * addons: mirror.answerstolove.com
         * extras: distrib-coffee.ipsl.jussieu.fr
        http://ftp.nluug.nl/ftp/pub/os/Linux/distr/CentOS/5.2/os/i386/repodata/repomd.xml: [Errno 14] HTTP Error 404: Not Found
        Trying other mirror.
        http://distrib-coffee.ipsl.jussieu.fr/pub/linux/centos/5.2/os/i386/repodata/repomd.xml: [Errno 14] HTTP Error 404: Not Found
        Trying other mirror.
        http://mirror.ukhost4u.com/centos/5.2/os/i386/repodata/repomd.xml: [Errno 14] HTTP Error 404: Not Found
        Trying other mirror.
        http://centosh2.centos.org/centos/5.2/os/i386/repodata/repomd.xml: [Errno 14] HTTP Error 404: Not Found
        Trying other mirror.
        http://mirror.atrpms.net/centos/5.2/os/i386/repodata/repomd.xml: [Errno 14] HTTP Error 404: Not Found
        Trying other mirror.
        http://centosf.centos.org/centos/5.2/os/i386/repodata/repomd.xml: [Errno 14] HTTP Error 404: Not Found
        Trying other mirror.
        http://centoso3.centos.org/centos/5.2/os/i386/repodata/repomd.xml: [Errno 14] HTTP Error 404: Not Found
        Trying other mirror.
        http://centosk.centos.org/centos/5.2/os/i386/repodata/repomd.xml: [Errno 14] HTTP Error 404: Not Found
        Trying other mirror.
        http://centosv.centos.org/centos/5.2/os/i386/repodata/repomd.xml: [Errno 14] HTTP Error 404: Not Found
        Trying other mirror.
        http://centosk3.centos.org/centos/5.2/os/i386/repodata/repomd.xml: [Errno 14] HTTP Error 404: Not Found
        Trying other mirror.
        Error: Cannot retrieve repository metadata (repomd.xml) for repository: base. Please verify its path and try again

    From what I gather after checking some of the URLs to see whether they exist or not, they require a redirect from .../5.2/... to just .../5/. Is this a common thing to have to change? And how could I change this? Here is my CentOS-Base.repo: http://pastebin.com/m67c1a022

    Read the article

  • Why doesn't my symbolic link work?

    - by orokusaki
    I'm trying to better understand symbolic links... and not having very much luck. This is my actual shell output, with the username/host changed:

        username@host:~$ mkdir actual
        username@host:~$ mkdir proper
        username@host:~$ touch actual/file-1.txt
        username@host:~$ echo "file 1" > actual/file-1.txt
        username@host:~$ touch actual/file-2.txt
        username@host:~$ echo "file 2" > actual/file-2.txt
        username@host:~$ ln -s actual/file-1.txt actual/file-2.txt proper
        username@host:~$ # Now, try to use the files through their links
        username@host:~$ cat proper/file-1.txt
        cat: proper/file-1.txt: No such file or directory
        username@host:~$ cat proper/file-2.txt
        cat: proper/file-2.txt: No such file or directory
        username@host:~$ # Check that actual files do in fact exist
        username@host:~$ cat actual/file-1.txt
        file 1
        username@host:~$ cat actual/file-2.txt
        file 2
        username@host:~$ # Remove the links and go home :(
        username@host:~$ rm proper/file-1.txt
        username@host:~$ rm proper/file-2.txt

    I thought that a symbolic link was supposed to operate transparently, in the sense that you could operate on the file it points to as if you were accessing the file directly (except of course in the case of rm, where the link itself is simply removed).
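
    For what it's worth, the usual culprit in a session like this is relative link targets: a link created inside proper/ with the target string actual/file-1.txt is resolved relative to proper/, i.e. proper/actual/file-1.txt, which doesn't exist. A small illustration (a sketch, not the original shell session):

        import os, tempfile

        root = tempfile.mkdtemp()
        os.makedirs(os.path.join(root, "actual"))
        os.makedirs(os.path.join(root, "proper"))
        with open(os.path.join(root, "actual", "file-1.txt"), "w") as f:
            f.write("file 1\n")

        # Relative target, the way `ln -s actual/file-1.txt proper` stores it:
        rel_link = os.path.join(root, "proper", "file-1.txt")
        os.symlink("actual/file-1.txt", rel_link)
        print(os.readlink(rel_link))      # actual/file-1.txt
        print(os.path.exists(rel_link))   # False: resolves to proper/actual/file-1.txt

        # An absolute target works no matter where the link itself lives:
        abs_link = os.path.join(root, "proper", "file-1-abs.txt")
        os.symlink(os.path.join(root, "actual", "file-1.txt"), abs_link)
        print(os.path.exists(abs_link))   # True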

    Read the article

  • File permissions question

    - by Matthew Robert Keable
    I just switched my site's server from Windows to Linux, and am finally able to control file permissions from my ftp. So, seeing that all permissions were 705 by default (and not wanting just anyone to have permission to execute), I went and changed everything to 744. Now, gif and jpg links don't work, pdf download links don't work, php links don't load, and mov files don't play. Conversely, all html files work perfectly. Setting things back doesn't seem to help. Even setting to 777 gets me nowhere. Any ideas on what might be going wrong? I've been googling file permissions all day (solved that problem with the Windows-Linux switch, which has bred a new problem), and I don't think anything I can find has escaped my attention. The site: absis-minas.com Go easy on a n00b. I took up learning php out of interest, and wound up delving into server management issues due to a very simple line of code not working the way it was supposed to. Thanks!
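
    As a hedged aside on what those octal modes actually grant (not a diagnosis of this particular host): each digit covers owner/group/other and is built from read=4, write=2, execute=1, and for directories the execute bit is what permits traversal, so recursively changing "everything" can silently lock the web server out of whole directory trees. A tiny sketch that spells a mode out:

        import stat

        def describe(mode_octal):
            """Spell out an octal permission mode such as 705 or 744."""
            mode = int(str(mode_octal), 8)
            names = ("owner", "group", "other")
            bits = ((stat.S_IRUSR, stat.S_IWUSR, stat.S_IXUSR),
                    (stat.S_IRGRP, stat.S_IWGRP, stat.S_IXGRP),
                    (stat.S_IROTH, stat.S_IWOTH, stat.S_IXOTH))
            for name, (r, w, x) in zip(names, bits):
                flags = "".join(c if mode & b else "-" for c, b in zip("rwx", (r, w, x)))
                print("{0:>5}: {1}".format(name, flags))

        describe(705)   # owner rwx, group ---, other r-x
        describe(744)   # owner rwx, group r--, other r--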

    Read the article

  • DAS vs SAN storage for serving 2 to 4 nodes

    - by Luke404
    We currently have 4 Linux nodes with local storage, arranged in two active/passive pairs with storage mirrored using DRBD, running virtual machines (actually using Xen Hypervisor) for typical hosting workloads (mail, web, a couple VPS, etc.). We're approaching the (presumed) maximum IOPS of those servers, and we're planning to migrate to an external storage solution with two active nodes, with capacity for up to four active nodes. Since we're an all-Dell shop I've done some research and found the MD3200 / MD3200i products should be the ones we're looking for. We are pretty sure we won't be attaching more than 4 hosts on a single storage and I'm wondering if there is any clear advantage for one or the other. In theory I should be able to attach 4 SAS hosts to a single MD3200 (single links on a single controller MD3200, or dual redundant SAS links from each host to a dual-controller MD3200), or 4 iSCSI hosts to a single MD3200i (directly on its 4 GigE ports without any switch, again with dual links for the dual controller option). Both setups should let us implement live VM migration since all hosts can access all the LUNs at the same time, and also some shared filesystem like GFS2 or OCFS2. Also, both setups should allow full redundancy of the whole system (assuming dual controllers in the storage). One difference I can see is that the DAS solution is actually limited to 4 hosts while the iSCSI one should be able to grow to more hosts (adding two GigE switches to the mix). One point for the iSCSI solution is that it would allow us to start out with our current nodes and upgrade them at a later time (we can't add other SAS controllers, but they already have 4 GigE ports each). With the right (iSCSI|SAS) controllers I should be able to connect diskless nodes and boot them off the external storage which I think is a good thing (get rid of any local storage). On the other hand, I would have thought the SAS one to be cheaper but it seems like an MD3200 actually costs a little less than an MD3200i (?) (please note: I've used Dell gear in my examples since that's what we're looking for but I assume the same goes with other vendors) I would like to know if my assumptions above are correct, and if I'm missing any important difference between the two setups.

    Read the article

  • wget crawling search results of news website

    - by kiltek
    I am trying to crawl the search results of a news website using wget. The name of the website is www.voanews.com. After typing in my search keyword and clicking search, it proceeds to the results. Then I can specify a "to" and a "from" date and hit search again. After this the URL becomes:

        http://www.voanews.com/search/?st=article&k=mykeyword&df=10%2F01%2F2013&dt=09%2F20%2F2013&ob=dt#article

    and the actual content of the results is what I want to download. To achieve this I created the following wget command:

        wget --reject=js,txt,gif,jpeg,jpg \
             --accept=html \
             --user-agent=My-Browser \
             --recursive --level=2 \
             www.voanews.com/search/?st=article&k=germany&df=08%2F21%2F2013&dt=09%2F20%2F2013&ob=dt#article

    Unfortunately, the crawler doesn't download the search results. It only gets into the upper link bar, which contains the "Home, USA, Africa, Asia, ..." links, and saves the articles they link to. It seems like the crawler doesn't check the search result links at all. What am I doing wrong, and how can I modify the wget command to download only the result list links (and of course the sites they link to)?
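
    As an alternative sketch (it does not fix the wget flags themselves): fetch the result page directly, pull the article hrefs out of it, and download each one. The "/content/" filter below is an assumed URL pattern for voanews.com article pages, not something taken from the site:

        import re
        import urllib.request

        SEARCH_URL = ("http://www.voanews.com/search/?st=article&k=germany"
                      "&df=08%2F21%2F2013&dt=09%2F20%2F2013&ob=dt")

        req = urllib.request.Request(SEARCH_URL, headers={"User-Agent": "My-Browser"})
        html = urllib.request.urlopen(req).read().decode("utf-8", "replace")

        # Placeholder filter: keep hrefs that look like article pages.
        hrefs = set(re.findall(r'href="([^"]+)"', html))
        articles = [h for h in hrefs if "/content/" in h]   # assumed URL pattern

        for link in articles:
            if link.startswith("/"):
                link = "http://www.voanews.com" + link
            name = link.rstrip("/").split("/")[-1].split("?")[0] or "index"
            urllib.request.urlretrieve(link, name + ".html")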

    Read the article

  • Nginx rewrite for link shortener + Wordpress pretty URLs

    - by detusueno
    Okay, so I installed Nginx/PHP/MySQL/WordPress via an online walkthrough, and it had me enter these rewrites to enable WordPress pretty URLs:

        if (-f $request_filename) {
            break;
        }
        if (-d $request_filename) {
            break;
        }
        rewrite ^(.+)$ /index.php?q=$1 last;
        error_page 404 = //index.php?q=$uri;

    This is then included in the vhost for my domain. What I'm trying to do now is add some redirection/link-shortener rewrites that will play nicely with the setup I have in mind. I'd like to redirect "x.com/y" to "x.com/script.php?id=y" for all external links that I post. The WordPress link setup right now has almost all internal links beginning with "news" (x.com/news/post-blah, x.com/news/category/1, etc.), BUT I also have a few root links that point to internal content (x.com/news, x.com/start). I'm guessing that's going to cause some conflicts. What's the best approach to do this? I've never worked with Nginx (or any rewrite rules), but maybe I can distinguish between "x.com/news" and "x.com/news/" to let them coexist? I had a friend set up a working version of this in Apache, and it'd be nice if I could get this up on Nginx again.

    Read the article

  • Symbolic link modification for HP unix

    - by kalpesh
    Hi David Zaslavsky, recently I was working on modifying symbolic links for particular files. While searching on the internet I saw your post, and I am trying to use the script you had posted:

        find /home/user/public_html/qa/ -type l \
             -lname '/home/user/public_html/dev/*' -printf \
             'ln -nsf $(readlink %p|sed s/dev/qa/) $(echo %p|sed s/dev/qa/)\n' \
             > script.sh

    I tried to adapt your script to my problem in an HP-UX environment, but it seems that the -lname option does not work with HP-UX's find. Do you know of something equivalent that I can use? Just to give you an idea of my problem: I want to change all the symbolic links inside a particular folder.

        New symbolic link -- /base/testusr/scripts
        Old symbolic link -- /base/produsr/scripts

    Folder "A" contains more than 100 different files whose soft links point to this path: /base/produsr/scripts. What I want is for the files inside folder A to point to /base/testusr/scripts instead. I am trying to achieve this on HP-UX and would really appreciate your help on this.
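
    Since GNU-specific find predicates such as -lname may not be available on HP-UX, the same loop can be expressed portably as a script that walks the tree, reads each link's target, and re-points it. A hedged sketch; the top-level path is a placeholder and the prefixes are taken from the description above:

        import os

        OLD_PREFIX = "/base/produsr/scripts"
        NEW_PREFIX = "/base/testusr/scripts"
        TOP = "/path/to/folder/A"   # placeholder for the folder being fixed

        for dirpath, dirnames, filenames in os.walk(TOP):
            for name in dirnames + filenames:
                path = os.path.join(dirpath, name)
                if not os.path.islink(path):
                    continue
                target = os.readlink(path)
                if target.startswith(OLD_PREFIX):
                    new_target = NEW_PREFIX + target[len(OLD_PREFIX):]
                    os.remove(path)               # drop the old link...
                    os.symlink(new_target, path)  # ...and recreate it with the new target
                    print("relinked", path, "->", new_target)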

    Read the article

  • Server side url scanner for malware, spyware , viruses and protect my visitors

    - by Vangel
    I have a forum/groups site that contains a lot of external URLs, sometimes direct download links. I want to protect my visitors from possible attacks from malware sites, as they are most likely to click on these links. Currently I implement DBL (Spamhaus), but that's not enough. I want to run a background task to check the outgoing links first. I have looked at similar questions on Stack Overflow (wrongly posted there) and here, but failed to find a question the same as mine or a good answer. People have suggested ClamAV; I don't believe it can detect web-hosted malware sites, and it has a lot of missed detections. I have looked at the Google Safe Browsing service (http://code.google.com/apis/safebrowsing/developers_guide_v2.html - very complicated to implement or maintain, plus midway through I get lost :S). I can go for a commercial solution, anything to protect the visitors and my site's brand. But I would like to hear the opinion of server admins, and whether anyone has implemented such a service. My server is a basic CentOS LAMP stack. Thank you very much in advance.
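
    For reference, the Safe Browsing Lookup API has become simpler than the v2 developer's guide linked above suggests; a hedged sketch against the newer v4 threatMatches:find endpoint (the API key is a placeholder), checking a batch of outgoing URLs from a background task:

        import json
        import urllib.request

        API_KEY = "YOUR_API_KEY"   # placeholder
        ENDPOINT = "https://safebrowsing.googleapis.com/v4/threatMatches:find?key=" + API_KEY

        def check_urls(urls):
            """Return the subset of urls flagged by the Safe Browsing v4 Lookup API."""
            body = {
                "client": {"clientId": "my-forum", "clientVersion": "1.0"},
                "threatInfo": {
                    "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING", "UNWANTED_SOFTWARE"],
                    "platformTypes": ["ANY_PLATFORM"],
                    "threatEntryTypes": ["URL"],
                    "threatEntries": [{"url": u} for u in urls],
                },
            }
            req = urllib.request.Request(
                ENDPOINT, data=json.dumps(body).encode("utf-8"),
                headers={"Content-Type": "application/json"})
            with urllib.request.urlopen(req) as resp:
                matches = json.loads(resp.read().decode("utf-8")).get("matches", [])
            return [m["threat"]["url"] for m in matches]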

    Read the article

  • How to set up a server without a hosting control panel

    - by A4J
    I have always used a control panel on my dedicated servers - from cPanel to Plesk to Virtualmin - and I am now considering ditching a CP altogether and manually editing config files. My requirements are fairly simple: I will host multiple sites on the server, some Apache with PHP & MySQL and some Passenger with Rails & Postgres. All will require email SMTP/POP. FTP/stats will not be required. Could someone please give me a quick run-down of what I would need to do, in terms of installing software and configuration? My server will come with a base install of CentOS 6.4 minimal. My thoughts so far:

    - Install/update the latest versions of MySQL & Postgres (are they 'safe' out of the box, or do I need to do anything else like set up root passwords etc.?)
    - Install Apache & PHP (again, are the base installs good to go or do they require security tweaks?)
    - Set up nameservers/hostnames/reverse DNS etc. (any guides on how to do this, please?)
    - Install RubyGems
    - Install and configure Dovecot and Postfix (any tips on doing this, or links to how-tos that cover it?)
    - Set up each website (any links to guides on how to do this?)
    - Install/configure a firewall (or is the default install good to go?)

    Any other tips or advice would be greatly appreciated, as would links to guides or how-tos.

    Read the article

  • Convert Plain Text Hyperlinks into HTML Hyperlinks in PHP

    - by Volomike
    I have a simple commenting system here... http://affbuzz.com/comments/7299a55137def55917a5dc6c4fe0f261af8a4217 ...and people can submit hyperlinks inside the plain-text field. When I display these records back from the database in the web page, what regular expression in PHP can I use to convert these links into HTML anchor links? Bonus: have the algorithm leave every other kind of link alone, just http and https.
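
    A hedged sketch of the idea, shown in Python for brevity (the same escape-then-replace approach carries over to PHP's preg_replace_callback): escape the raw comment first, then turn only http/https URLs into anchors so other schemes are left untouched:

        import html
        import re

        URL_RE = re.compile(r'\b(https?://[^\s<>"]+)', re.IGNORECASE)

        def render_comment(text):
            """Escape the raw comment first, then turn bare http/https URLs
            into anchors; other schemes (ftp, javascript, ...) are left alone."""
            escaped = html.escape(text)
            return URL_RE.sub(r'<a href="\1" rel="nofollow">\1</a>', escaped)

        print(render_comment('see https://example.com/page?a=1&b=2 and ftp://ignored'))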

    Read the article

  • 'on the web' drupal module showing images on localhost but not remote host

    - by Andrew Welch
    Hi, the simple little module titled 'On the Web' for Drupal shows social network links. It links to some images in the module directory. On my localhost install they are showing, but in the remote install the img tags aren't even appearing. I looked in the module's files and the path isn't hard-coded or anything, so it's not to do with that. Any ideas? The module is here: http://drupalmodules.com/module/on-the-web Cheers, Andy

    Read the article

  • Have rTorrent to move .torrent files

    - by David Alvares
    I am running rTorrent 0.9.2 and have configured it to move completed torrents to a different folder with this configuration line:

        system.method.set_key = event.download.finished,move_complete,"d.set_directory=~/done/;execute=mv,-u,$d.get_base_path=,~/done/"

    This is working fine, but I would like it to also move the .torrent file that it creates (from a magnet link, into the session directory) into this done directory, with the same name as the torrent and a .torrent extension. I tried adding another cp command, but I cannot seem to figure out which variable stores the torrent's hash ($d.get_hash did not work), which is what the .torrent files are named after in the session directory. Is there a way to do this with rTorrent, and if so, how?

    Read the article

  • RSS Detector in NSXMLParser

    - by Alexandre Cassagne
    How do I use NSXMLParser to find RSS links in HTML files? The tags in the source look like this:

        <link rel="alternate" type="application/rss+xml" title="CNN - Top Stories [RSS]" href="http://rss.cnn.com/rss/cnn_topstories.rss">
        <link rel="alternate" type="application/rss+xml" title="CNN - Recent Stories [RSS]" href="http://rss.cnn.com/rss/cnn_latest.rss">

    I need this in order to automatically detect RSS links in an RSS app. Thanks!
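
    The detection itself is just attribute matching on <link> start tags; a small sketch of that logic (Python purely as illustration - the same checks would go in the parser delegate's start-element callback on iOS):

        from html.parser import HTMLParser

        class FeedLinkFinder(HTMLParser):
            """Collect hrefs from <link rel="alternate" type="application/rss+xml"> tags."""
            def __init__(self):
                super().__init__()
                self.feeds = []

            def handle_starttag(self, tag, attrs):
                a = dict(attrs)
                if (tag == "link"
                        and (a.get("rel") or "").lower() == "alternate"
                        and "rss" in (a.get("type") or "").lower()
                        and a.get("href")):
                    self.feeds.append((a.get("title", ""), a["href"]))

        page = ('<link rel="alternate" type="application/rss+xml" '
                'title="CNN - Top Stories [RSS]" href="http://rss.cnn.com/rss/cnn_topstories.rss">')
        finder = FeedLinkFinder()
        finder.feed(page)
        print(finder.feeds)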

    Read the article

  • Is there a CakePHP offline manual

    - by Leo
    There used to be, but there don't seem to be any direct links. A little digging around revealed some answers which I thought it would be useful to share. These are links to the manual in one page - useful for offline use or creating a PDF using Dardo Sordi Bogado's build script: http://rapidshare.com/files/218826372/manual-builder.zip 1.2 Manual in one page http://book.cakephp.org/complete/3/The-Manual 1.3 Manual in one page http://book.cakephp.org/complete/876/The-Manual Also see this thread: http://groups.google.com/group/cake-php/browse_thread/thread/5f45c1d0...

    Read the article

  • How to add page title in url in asp.net mvc? (url generation)

    - by Ante
    How do I dynamically create URLs/links like:

        www.restaurant.com/restaurant/restaurant-name-without-some-characters-like-space-coma-etc/132

    What are the keywords I can use to google some articles on this topic? (That is, how to generate and handle this kind of URL inside ASP.NET MVC.) There are some questions: How do I generate the links? (Store slugs in the DB?) Should I redirect or not if the slug isn't canonical? Edit: apparently they are called slugs.
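
    The slug itself is usually produced by a small helper that lowercases the name, strips accents and punctuation, and collapses separators into hyphens; a hedged sketch of that logic (shown in Python; in ASP.NET MVC the same value would be generated when building the link and matched by a {slug}/{id}-style route). On the canonical question, a common pattern is to look the record up by the numeric id and issue a 301 redirect whenever the incoming slug differs from the stored one.

        import re
        import unicodedata

        def slugify(name):
            """Turn a display name like "Chez Marcel & Fils, Ltd." into 'chez-marcel-fils-ltd'."""
            # Strip accents, lowercase, drop anything that isn't alphanumeric,
            # then collapse runs of separators into single hyphens.
            name = unicodedata.normalize("NFKD", name).encode("ascii", "ignore").decode("ascii")
            name = re.sub(r"[^a-z0-9]+", "-", name.lower()).strip("-")
            return name or "restaurant"

        print(slugify("Chez Marcel & Fils, Ltd."))   # chez-marcel-fils-ltd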

    Read the article

  • How to use regular expression in lxml xpath?

    - by Arty
    I'm using a construction like this:

        doc = parse(url).getroot()
        links = doc.xpath("//a[text()='some text']")

    But I need to select all links whose text begins with "some text", so I'm wondering whether there is any way to use a regexp here? I didn't find anything in the lxml documentation.
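
    For what it's worth, two things should cover this case: plain XPath 1.0 already has starts-with(), and lxml also exposes the EXSLT regular-expressions functions under the http://exslt.org/regular-expressions namespace. A short sketch (the URL is a placeholder):

        from lxml.html import parse

        doc = parse("http://example.com/").getroot()   # placeholder URL

        # Plain XPath: text beginning with a fixed prefix.
        links = doc.xpath("//a[starts-with(text(), 'some text')]")

        # EXSLT regular expressions via the re: namespace for anything fancier.
        NS = {"re": "http://exslt.org/regular-expressions"}
        links_re = doc.xpath(r"//a[re:test(text(), '^some\s+text', 'i')]", namespaces=NS)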

    Read the article

  • Sanitizing input for display in view when using simple_format

    - by Eric
    Hi, I'm trying to figure out the right way to display comments such that newlines and links are rendered. I know that usually you should only display user input after escaping HTML with h(). That of course won't display newlines or links, so I found the simple_format and auto_link methods. What I am now doing is:

        simple_format(sanitize(auto_link(comment.text)))

    Is this the right way to do this, and is it still safe from XSS attacks? Thanks! Eric

    Read the article

  • How to create custom javadoc tags

    - by Carlucho
    How do I create custom javadoc tags such as @pre / @post? I found some links that explain it, but I haven't had luck with them; I don't know if it's because I'm already tired, but I can't figure out where to put things. These are some of the links:

        http://www.developer.com/java/other/article.php/3085991/Javadoc-Programming.html
        http://java.sun.com/j2se/1.5.0/docs/tooldocs/windows/javadoc.html

    I'm sorry to ask to be spoon-fed, but I'm at the stage where I only see black dots on the screen :\ Thanks a bunch

    Read the article

  • Will_paginate stuck on page 2

    - by Sleepycat
    For some reason my will_paginate collection is stuck on page 2. I have the usual links the view helper provides except every page after page one links to http://localhost:3000/ceo/gr_messages?page=2 I have tried to add the :order option with no luck. I have also ensured that the request is a get as mentioned on this page: http://wiki.github.com/mislav/will_paginate/simple-search Any other thoughts or suggestions would be appreciated.

    Read the article
