Search Results

Search found 3750 results on 150 pages for 'joomla sef urls'.

Page 41/150

  • Is there a free tool/package that can monitor web traffic and display URLS accessed? [closed]

    - by Anthony
    I couldn't find a similar question, but then maybe I am searching for the wrong terms. A few years ago I used a router-like device, I'm pretty sure it was a SonicWall, that did this at a client's site. Basically all traffic would be routed through this device, and it allowed the manager/administrator to inspect the web usage of the workers, determine how often certain resources were accessed, and block them if necessary (much like a content filter). It showed reports based on the domain names reached: Facebook.com, Bebo.com and so on. It also displayed the usual IP traffic information; it was a UTM as well. I have tried Endian Firewall, with its NTOP install, but I don't think that will show URLs browsed. Maybe I just haven't found it in NTOP yet? I need this to troubleshoot connection and traffic issues at my home, with about twenty devices/users, so I didn't want to buy a dedicated solution, and I have spare hardware to use for a community product.
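
    While evaluating heavier tools, a quick sanity check for the plain-HTTP part is possible with tshark on any machine positioned to see the traffic (a mirrored switch port or a bridging box). The interface name below is an assumption, and HTTPS reveals only hostnames, not full URLs:

        # Sketch: print source IP, Host header and requested path for
        # every plain-HTTP request seen on the interface
        tshark -i eth0 -Y http.request -T fields -e ip.src -e http.host -e http.request.uri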

    Read the article

  • Is there a method to export the URLs of the open tabs of a Firefox window?

    - by hekevintran
    If I have a Firefox window open that contains 10 tabs, is there a way, in Firefox or via a plug-in, to get the URLs of those 10 tabs as a text file or in some other format? Right now if I want to do this I need to copy the URL of tab A, paste it somewhere, move to tab B, and repeat. I could also bookmark all the tabs into a folder and export that, but that seems like such a hassle. If there is no such method, could someone point me to some documents that describe the basics of writing a Firefox plug-in? I am willing to write this myself if there is no "standard" way.
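
    For anyone taking the write-it-yourself route today, a minimal sketch of the core logic against the WebExtensions tabs API (which postdates this question; it assumes a manifest.json that declares the "tabs" and "clipboardWrite" permissions):

        // Collect the URLs of every tab in the current window and copy
        // them to the clipboard as one newline-separated string.
        browser.tabs.query({ currentWindow: true }).then((tabs) => {
          const urls = tabs.map((tab) => tab.url).join("\n");
          return navigator.clipboard.writeText(urls);
        });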

    Read the article

  • Where to find URLs for sources.list for Debian for running apt-get update?

    - by Boda Cydo
    Can anyone tell me where to find URLs to put in /etc/apt/sources.list for Debian so that I can run apt-get update? I couldn't find a precise answer by searching Google. When I currently try running apt-get update I get:

        W: Failed to fetch ftp://ftp.debian.org/debian/dists/lenny/contrib/binary-i386/Packages Unable to fetch file, server said 'Failed to open file. ' [IP: 130.89.148.12 21]
        W: Failed to fetch ftp://ftp.debian.org/debian/dists/lenny/non-free/binary-i386/Packages Unable to fetch file, server said 'Failed to open file. ' [IP: 130.89.148.12 21]

    I have no idea how to solve this. Here is what my current sources.list looks like:

        deb ftp://ftp.debian.org/debian lenny main contrib non-free
        deb-src ftp://ftp.debian.org/debian lenny main contrib non-free
        deb ftp://ftp.debian.org/debian lenny/updates main contrib non-free
        deb-src ftp://ftp.debian.org/debian lenny/updates main contrib non-free

    I'm running debian_version 5.0.8:

        # cat /etc/debian_version
        5.0.8

    Thanks!
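
    If the release has been moved off the primary mirrors (lenny is long end-of-life), the usual fix is to point sources.list at archive.debian.org over HTTP instead of the retired FTP paths. A sketch, worth checking against the Debian archive documentation:

        # /etc/apt/sources.list sketch for an archived release (lenny);
        # archive.debian.org retains EOL releases, served over HTTP
        deb http://archive.debian.org/debian/ lenny main contrib non-free
        deb http://archive.debian.org/debian-security/ lenny/updates main contrib non-free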

    Read the article

  • How can I redirect URLs using the proxy module in Apache?

    - by LearningIT
    This seems like a super-basic question, but I am having a hard time tracking down a straightforward solution, so I appreciate any help and patience with me on this: I want to configure my Apache proxy server to redirect certain URLs so that, for example, a web browser HTTP request for www.olddomain.com gets passed to the proxy server, which then routes the request to www.newdomain.com, which sends a response to the proxy server, which then passes it back to the web browser. It seems so simple, yet I don't see how to achieve this with Apache. I know Squid/Squirm offer this functionality, so I'm guessing I am missing something really basic. I know I can use RewriteRule to dynamically modify the URL and pass it to the proxy server, but I effectively want to do the reverse, whereby the proxy server receives the original URL, applies the RewriteRule, and then forwards the HTTP request to the new URL. Hope that makes sense. Thanks in advance for any help.
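
    For reference, a minimal reverse-proxy sketch with mod_proxy; the directives are standard Apache, the host names are the question's own placeholders:

        # Requires mod_proxy and mod_proxy_http to be enabled
        <VirtualHost *:80>
            ServerName www.olddomain.com
            # Forward requests to the new host; ProxyPassReverse rewrites
            # Location headers in responses so redirects come back through
            # the proxy rather than exposing the new host directly
            ProxyPass        / http://www.newdomain.com/
            ProxyPassReverse / http://www.newdomain.com/
        </VirtualHost>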

    Read the article

  • Is there a way to redirect certain URLs to specific web browsers in Linux?

    - by jraxxo
    I'm using Chrome as my default browser in Ubuntu 12.10. I need to use Firefox for business purposes (certain websites pertaining to my work only work with Firefox). Is there a way to force Ubuntu to use Firefox for certain types of URLs (maybe as defined by a regular expression) while maintaining Chrome as my default browser for all my other tasks? Perhaps as a shell script running in the background? I'd like this to work system-wide, covering links from Chrome itself as well as PDFs/ODTs, etc. I have searched for solutions, but I couldn't find anything besides OpenWith, a Firefox extension that adds a button for opening certain links in other browsers; that would again require me to open Firefox beforehand, which does not help me at all. Does anyone have any ideas? Something like Choosy for Linux?
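
    One common shape for this, sketched below with made-up URL patterns: a small dispatcher script registered as the system's default browser (via its own .desktop file for the x-scheme-handler/http and x-scheme-handler/https MIME types), which then hands each URL to the right browser:

        #!/bin/sh
        # url-dispatch: decide which browser gets a given URL.
        # Register this script as the default handler so every link,
        # whatever application it comes from, passes through it.
        case "$1" in
          *work.example.com*|*intranet*) exec firefox "$1" ;;
          *)                             exec google-chrome "$1" ;;
        esac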

    Read the article

  • De-index URL parameters by value

    - by Doug Firr
    Fair warning: this question is lengthy, so allow me to provide a one-sentence summary: I need to get Google to de-index URLs that have parameters with certain values appended. I have a website example.com with language translations. There used to be many translations, but I deleted them all so that only the English (default) and French options remain. When one selects a language option, a parameter is added to the URL. For example, the home page: https://example.com (default), https://example.com/main?l=fr_FR (French). I added a robots.txt to stop Google from crawling any of the language translations:

        # robots.txt generated at http://www.mcanerin.com
        User-agent: *
        Disallow:
        Disallow: /cgi-bin/
        Disallow: /*?l=

    So any pages containing "?l=" should not be crawled. I checked in GWT using the robots testing tool; it works. But under HTML improvements, the previously crawled language-translation URLs remain indexed. The internet says to return a 404 for the removed URLs so Google knows to de-index them. I checked what my CMS would serve if I visited one of the URLs that should no longer exist. This URL was listed in GWT under duplicate title tags (one of the reasons I want to scrub up my URLs): https://example.com/reports/view/884?l=vi_VN&l=hy_AM. This URL should not exist; I removed the language translations. The page loads when it should not! I played around and typed example.com?whatever123. It seems that parameters always load as long as everything before the question mark is a real URL. So if Google has indexed all these URLs with parameters, how do I remove them? I cannot check whether a 404 is being generated, because the page always loads; it's the parameter that needs to be de-indexed.
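
    A note on the mechanism, with a hedged sketch: robots.txt only blocks crawling, so Google cannot observe a 404/410 or a noindex on a URL it is disallowed from fetching, which is why blocked-but-already-indexed URLs linger. One approach is to drop the Disallow and have the application answer retired language values with 410 Gone. The allow-list below is an illustrative assumption, not the asker's CMS code:

        <?php
        // Sketch: refuse retired language codes with 410 Gone so Google
        // can drop them from the index. Allowed values are assumptions.
        $allowed = array('en_GB', 'fr_FR');
        if (isset($_GET['l']) && !in_array($_GET['l'], $allowed, true)) {
            http_response_code(410);
            exit('Gone');
        }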

    Read the article

  • Difference between two kinds of Bing URL Referers

    - by joshuahedlund
    Most of the referral URLs that I get from Bing have the following syntax:

        http://www.bing.com/search?q=keywords+keywords&[some other variables]

    However, I just noticed that maybe 10-20% of them come in like this:

        http://www.bing.com/url?source=search&[some other variables]&url=http%3A%2F%2Fwww.example.com/user-landing-page-on-my-site&yrktarget=_top&q=keywords+keywords&[some other variables]

    The first syntax gives me the keywords the user typed in, but the second gives me both the keywords the user typed in and their landing page on my site. I was originally unaware of the second kind altogether, because I have a customized referral report that filters out URLs containing my domain. But now that I have noticed them, I want to know why they occur, to see if I can get more to occur this way, because the second syntax contains more valuable information. If I go to one of the first URLs, it gives me a typical Bing query page. The second URLs seem to just redirect me to the Bing home page. I'm not sure if it has to do with the kind of search being performed (I also get a few http://www.bing.com/shopping/search?q= referers) or some other metric. Does anyone know what causes some referral URLs from Bing to have the /search?q syntax and others the /url?source syntax? P.S. I have verified that I am getting both kinds of URLs from non-advertising clicks. P.P.S. I am not talking about data in Google Analytics or similar software, but the raw $_SERVER['HTTP_REFERER'] value coming from the client's original request.
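
    For pulling the extra data out of the second form, a short PHP sketch; the field names follow the URLs quoted above, and parse_str handles the percent-decoding:

        <?php
        // Sketch: extract landing page and keywords from a Bing
        // /url?...&url=...&q=... style referrer. The first form
        // (/search?q=) simply yields no 'url' field.
        $referer = isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '';
        parse_str((string) parse_url($referer, PHP_URL_QUERY), $params);
        $landing  = isset($params['url']) ? $params['url'] : null; // decoded by parse_str
        $keywords = isset($params['q'])   ? $params['q']   : null;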

    Read the article

  • Which Joomla module can be used to show articles with thumbnails?

    - by KoolKabin
    Hi guys, I am trying to implement the ja_nickel Joomla template on my site. Here is a preview of the ja_nickel template: http://demo.ijoomlahost.com/ja-nickel/ I want my latest news articles to be displayed in place of the top information block. In that template they appear to use a thumbnail image, a title and content, whereas standard articles have only a title and content. So which module can be used to do that, and how? I want an image, title and content in an article, but I don't know which module provides that. Or can we just merge the title and content of a normal article and display the article's image like that?

    Read the article

  • How to remove repeated records from results in LINQ to SQL

    - by Sadegh
    Hi, I want to remove repeated records from the results, but Distinct() doesn't do this for me. Why?

        var results = (from words in _Xplorium.Words
                       join wordFiles in _Xplorium.WordFiles on words.WordId equals wordFiles.WordId
                       join files in _Xplorium.Files on wordFiles.FileId equals files.FileId
                       join urls in _Xplorium.Urls on files.UrlId equals urls.UrlId
                       where files.Title.Contains(query) || files.Description.Contains(query)
                       orderby wordFiles.Count descending
                       select new SearchResultItem()
                       {
                           Title = files.Title,
                           Url = urls.Address,
                           Count = wordFiles.Count,
                           CrawledOn = files.CrawledOn,
                           Description = files.Description,
                           Lenght = files.Lenght,
                           UniqueKey = words.WordId + "-" + files.FileId + "-" + urls.UrlId
                       }).Distinct();
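
    A hedged note on why this bites: if the query runs as LINQ to Objects, Distinct() compares object references, so every new SearchResultItem is "distinct"; if it translates to SQL, DISTINCT compares every projected column, so rows that differ only in Count or in the word portion of UniqueKey still survive. A common fix is an IEqualityComparer keyed on whatever defines "the same record" (the key choice below is illustrative), applied in memory:

        // Sketch: value-based Distinct via a custom comparer.
        using System.Collections.Generic;
        using System.Linq;

        class SearchResultItemComparer : IEqualityComparer<SearchResultItem>
        {
            public bool Equals(SearchResultItem x, SearchResultItem y)
            {
                return x != null && y != null && x.Url == y.Url;
            }

            public int GetHashCode(SearchResultItem obj)
            {
                return obj.Url == null ? 0 : obj.Url.GetHashCode();
            }
        }

        // Usage: materialise first, since SQL cannot translate the comparer:
        // var distinct = results.AsEnumerable().Distinct(new SearchResultItemComparer());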

    Read the article

  • How should I loop a Nokogiri search in Ruby?

    - by kim
    I have the following, which retrieves the title from each URL in an array containing a list of URLs.

        require 'rubygems'
        require 'nokogiri'
        require 'open-uri'

        @urls = ["http://google.com", "http://yahoo.com", "http://rubyonrails.org"]
        @found_titles = Array.new
        @found_titles[0] = Nokogiri::HTML(open("#{@urls[0]}")).search("title").inner_html
        # this can go on forever... but
        # @found_titles[1] = Nokogiri::HTML(open("#{@urls[1]}")).search("title").inner_html
        # @found_titles[2] = Nokogiri::HTML(open("#{@urls[2]}")).search("title").inner_html
        puts "#{@found_titles[0]}"

    How should I write a loop for this, so I can get the titles even when the list in the @urls array gets longer?
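
    A minimal sketch of the loop form, using the same libraries as above (on modern Ruby, URI.open replaces the bare Kernel#open for URLs):

        require 'rubygems'
        require 'nokogiri'
        require 'open-uri'

        @urls = ["http://google.com", "http://yahoo.com", "http://rubyonrails.org"]

        # map yields one title per URL, however long the list grows
        @found_titles = @urls.map do |url|
          Nokogiri::HTML(open(url)).search("title").inner_html
        end

        @found_titles.each { |title| puts title }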

    Read the article

  • IIS7.5 Outbound Rule for lower case URLs in <a href="...">

    - by Quog
    Hi, I know how to canonicalise the case of URLs on incoming requests to IIS 7.5; in fact, there's a built-in rule template to start from. But how about outbound (without changing the code)? This is as far as I've got:

        <outboundRules>
          <rule name="Outbound lowercase" preCondition="IsHTML" enabled="true">
            <match filterByTags="A" pattern="[A-Z]" ignoreCase="false" />
            <action type="Rewrite" value="{ToLower:{R:0}}" />
          </rule>
          <preConditions>
            <preCondition name="IsHTML">
              <add input="{RESPONSE_CONTENT_TYPE}" pattern="^text/html" />
            </preCondition>
          </preConditions>
        </outboundRules>

    However, IIS barfs on the action with a 500, implying an invalid web.config, probably on the {ToLower:XXXX}, which I stole from the MS-supplied inbound rule template. Does anyone know how to do this? And does anyone know where the options are fully documented? (My GoogleNinja skills failed me: I found this, but "Specifies value syntax for the rule. This element is available only for the Rewrite action type" is not really comprehensive.) Thanks, Damian

    Read the article

  • How to route tree-structured URLs with ASP.NET Routing?

    - by Venemo
    Hello everyone, I would like to achieve something very similar to this question, with some enhancements. There is an ASP.NET MVC web application. I have a tree of entities, for example a Page class which has a property called Children, of type IList<Page>. (An instance of the Page class corresponds to a row in a database.) I would like to assign a unique URL to every Page in the database. I handle Page objects with a controller called PageController. Example URLs:

        http://mysite.com/Page1/
        http://mysite.com/Page1/SubPage/
        http://mysite.com/Page/ChildPage/GrandChildPage/

    You get the picture. So, I'd like every single Page object to have its own URL, equal to its parent's URL plus its own name. In addition, I would also like the ability to map a single Page to the / (root) URL. I would like to apply these rules:

    1. If a URL can be handled by any other route, or a file exists in the filesystem at the specified URL, let the default URL mapping happen.
    2. If a URL can be handled by the virtual path provider, let that handle it.
    3. If there is no other handler, map the remaining URLs to the PageController class.

    I also found this question, and also this one and this one, but they weren't of much help, since they don't provide an explanation of my first two points. I see the following possible solutions:

    1. Map a route for each page individually. This requires me to go over the entire tree when the application starts, adding an exact-match route to the end of the route table.
    2. I could add a route with {*path} and write a custom IRouteHandler that handles it, but I can't see how I could deal with the first two rules then, since this handler would get to handle everything.

    So far, the first solution seems to be the right one, because it is also the simplest. I would really appreciate your thoughts on this. Thank you in advance!
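
    As a sketch of the second option: a catch-all route registered last keeps rules 1 and 2 largely intact, since routing skips files on disk by default (RouteExistingFiles is false) and earlier routes win. PageExistsConstraint here is a hypothetical custom IRouteConstraint that walks the Page tree; the names are illustrative assumptions, not the asker's code:

        // In Global.asax / RegisterRoutes. Sketch only; PageExistsConstraint
        // is a hypothetical IRouteConstraint that checks the Page tree, so
        // URLs that match no Page fall through to other handlers or 404.
        routes.MapRoute(
            "Pages",                                      // route name
            "{*path}",                                    // catch-all URL pattern
            new { controller = "Page", action = "Show" }, // defaults
            new { path = new PageExistsConstraint() }     // only match real pages
        );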

    Read the article

  • How do I use a jQuery not selector to select relative URLs?

    - by Matt
    I'm working on a little jQuery script to add Google Analytics pageTracker onclick data to all external URLs on my forum, allowing me to track clicks to external sites. I don't want to add the onclick to internal links on forum.sitename or sitename, and I don't want to add it to any hrefs marked # or that start with /. My script below works nicely, but for one minor problem! All of the forum's URLs are relative and don't start with /. I appear to have no way to change that, so I need to modify the jQuery below to prevent it adding the onclick to those relative links, as it currently does. What I want to do is write a .not() function like .not("[href!^=http]") to prevent jQuery from adding the onclick to any hrefs that do not start with http. However, .not() appears not to support this. I'm new to jQuery and can't figure this out. Any pointers would be massively appreciated.

        $(document).ready(function(){
            // Get URL from a href
            var URL = $("a").attr('href');
            // Add pageTracker data for GA tracking
            $("a")
                .not("[href^=#]")
                .not("[href^=http://forum.sitename]")
                .not("[href^=http://www.sitename]")
                .attr("onclick","pageTracker._trackEvent('Outgoing_Links', 'Forum', " + URL + ");");
        });

    Thanks!
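
    A hedged rework of the same idea: filter down to absolute http links, exclude the two internal hosts, then read each link's own href inside .each(). (The original var URL = $("a").attr('href') captures only the first anchor's href, so every link would get the same URL.)

        $(document).ready(function () {
            // Keep only absolute http links, drop the two internal hosts,
            // then tag each remaining link with its own URL.
            $('a[href^="http"]')
                .not('[href^="http://forum.sitename"], [href^="http://www.sitename"]')
                .each(function () {
                    var url = $(this).attr('href');
                    $(this).attr('onclick',
                        "pageTracker._trackEvent('Outgoing_Links', 'Forum', '" + url + "');");
                });
        });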

    Read the article

  • How do I create a Spring 3 + Tiles 2 webapp using REST-ful URLs?

    - by Ichiro Furusato
    I'm having a heck of a time resolving URLs with Spring 3.0 MVC. I'm just building a HelloWorld to try out how to build a RESTful webapp in Spring, nothing theoretically complicated. All of the examples I've been able to find are based on configurations that pay attention to file extensions ("*.htm" or "*.do"), include an artificial directory-name prefix ("/foo"), or even prefix paths with a dot (ugly), all approaches that use some artificial regex pattern as a signal to the resolver. For a REST approach I want to avoid all that muck and use only the natural URL patterns of my application. I would assume (perhaps incorrectly) that in web.xml I'd set a url-pattern of "/*" and pass everything to the DispatcherServlet for resolution, then just rely on URL patterns in my controller. I can't reliably get my resolver(s) to catch the URL patterns, and in all my trials this results in a resource-not-found error, a stack overflow (loop), or some kind of opaque Spring 3 ServletException stack trace. (One of my ongoing frustrations with Spring generally is that the error messages are often not very helpful.) I want to work with a Tiles 2 resolver. I've located my *.jsp files in WEB-INF/views/ and have a single-line index.jsp file at the application root redirecting to the index file set by my layout.xml (the Tiles 2 configurer). I do all the normal Spring 3 high-level configuration:

        <mvc:annotation-driven />
        <mvc:view-controller path="/" view-name="index"/>
        <context:component-scan base-package="com.acme.web.controller" />

    ...followed by all sorts of combinations and configurations of UrlBasedViewResolver, InternalResourceViewResolver, UrlFilenameViewController, etc., with all manner of variations in my Tiles 2 configuration file. Then in my controller I try to pick up my URL patterns. The problem is, I can't reliably even get the resolver(s) to catch the patterns to send to my controller. This has now stretched to multiple days with no real progress on something I thought would be very simple to implement. I'm perhaps trying to do too much at once, though I would think this should be a simple (almost default) configuration. I'm just trying to create a simple HelloWorld-type application; I wouldn't expect this to be rocket science. Rather than me posting my own configurations (which have ranged all over the map), does anyone know of an online example that shows a simple Spring 3 MVC + Tiles 2 web application that uses RESTful URLs (i.e., avoiding forced URL patterns such as file extensions, added directory names or dots) and relies solely on Spring 3 code/annotations (i.e., nothing outside of Spring MVC itself) to accomplish this? Is there an easy way to do this? Thanks very much for any help.
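
    One configuration that commonly resolves exactly this symptom (a sketch, not necessarily what the asker settled on): map the DispatcherServlet to "/" rather than "/*" (the "/*" pattern also swallows the JSP forwards that Tiles performs, which produces the loops and opaque errors described), and let static resources fall through to the container:

        <!-- web.xml: "/" leaves *.jsp handling to the container's JspServlet;
             the servlet-name must match the asker's DispatcherServlet entry -->
        <servlet-mapping>
            <servlet-name>dispatcher</servlet-name>
            <url-pattern>/</url-pattern>
        </servlet-mapping>

        <!-- Spring MVC config: serve static files via the container's
             default servlet (available from Spring 3.0.4) -->
        <mvc:default-servlet-handler/>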

    Read the article

  • How can I use wildcard URLs with CodeIgniter?

    - by tarnfeld
    I am building a system that has URLs like app.com/login and app.com/dothing, but it will also have app.com/xOldP. The web application is built using the latest version of CodeIgniter, and I was wondering if there is a way to add a check: if the URL is a controller, handle it and finish; but if not, do something else (which will check the URL and, if it's not real, 404, which I can do). I was wondering how annoying this would be :/ Some sample code or links would be great!
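
    A sketch of one way to arrange that in application/config/routes.php; the controller and route names here are illustrative. List the real controllers explicitly, then let a final catch-all route claim everything else:

        <?php
        // application/config/routes.php (sketch; order matters, the
        // catch-all must come last)
        $route['default_controller'] = 'home';
        $route['login']   = 'auth/login';
        $route['dothing'] = 'things/dothing';
        // ...explicit routes for every real controller above...
        // Anything else falls through to a lookup action that either
        // resolves the slug or shows a 404:
        $route['(:any)'] = 'lookup/resolve/$1';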

    Read the article

  • What rendering services are available to convert URLs to images?

    - by tangens
    I know some services that encode the description of an image inside a URL. For example: yuml.me for drawing UML diagrams, or www.codecogs.com for rendering LaTeX equations. I really like these services and use them inside my Javadoc to illustrate the documentation. On stackoverflow.com it's a bit tricky to encode these URLs; see my request at meta.stackoverflow.com. Question: are there any other rendering services that are useful for documenting source code?
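
    As a concrete illustration of the pattern (the exact encodings should be checked against each service's documentation): codecogs renders whatever LaTeX is URL-encoded after its endpoint, and yuml does the same for diagram text:

        http://latex.codecogs.com/gif.latex?E%20%3D%20mc%5E2
        http://yuml.me/diagram/class/[Customer]->[Order]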

    Read the article

  • How to map non-REST URLs to REST ones?

    - by Krzysztof Luks
    I have a small Rails app that has default scaffold-generated routes, e.g. /stadia/1.xml. However, I have to support a legacy client app that can't construct such URLs correctly. What I need to do is map a URL in the form /stadia?id=1&format=xml to /stadia/1.xml, or even something like /myApp?model=<model_name>&id=<id>&format=xml to /<model_name>/<id>.xml. Is it possible to craft an appropriate route in Rails?
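
    Routes in Rails match only the path, not the query string, so one hedged sketch (Rails 2-era syntax; the controller and action names are illustrative) is a thin legacy action that redirects using the incoming params:

        # config/routes.rb
        map.connect 'myApp', :controller => 'legacy', :action => 'forward'

        # app/controllers/legacy_controller.rb
        class LegacyController < ApplicationController
          def forward
            # Turn ?model=stadium&id=1&format=xml into /stadia/1.xml
            redirect_to :controller => params[:model].pluralize,
                        :action     => 'show',
                        :id         => params[:id],
                        :format     => params[:format]
          end
        end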

    Read the article

  • Supporting Twitter-like user page URLs with Apache/PHP?

    - by user246114
    Hi, I'm using PHP on Apache with MySQL. I want to let users enter a URL into their browser to see a custom user page, just like Twitter does. For example, they could enter URLs like www.mysite.com/johndoe or www.mysite.com/janedoe and see that user's page. How could I do this with PHP and Apache? I don't want to create a folder on disk for every user as above; I'd instead like to catch the URL and generate the page on the fly for them. Thanks
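
    The usual shape for this is a mod_rewrite rule that hands any top-level path that isn't a real file or directory to a single PHP script. A sketch (user.php and the character class are illustrative assumptions); the script then reads $_GET['username'] and renders the page, or a 404, from MySQL:

        # .htaccess (requires mod_rewrite and AllowOverride FileInfo)
        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^([A-Za-z0-9_]+)/?$ user.php?username=$1 [L,QSA]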

    Read the article

  • Insert random <script> for using $(document).ready(function(){}); in Joomla

    - by Anriëtte Combrink
    Hi, I have an article in which I use PHP code inside the text editor in Joomla, in the backend. I can see jQuery already loaded when the page loads. Here is the code inside the article edit textbox:

        <?php
        $username="XXX";
        $password="XXX";
        $database="XXX";
        mysql_connect('localhost',$username,$password) or die(mysql_error());
        mysql_select_db($database) or die("Unable to select database");
        $result=mysql_query("SELECT * FROM birthdays ORDER BY name") or die(mysql_error());
        echo "<table width='100%' cellspacing='10' cellpadding='0' border='0'>";
        echo "<tr valign='top'><th align='left'></th><th align='left'>Name</th><th align='left'>Email</th><th align='left'>Day</th><th align='left'>Month</th></tr><tr><td>&nbsp;</td></tr>";
        while ($row = mysql_fetch_array($result)) {
            echo "<tr>";
            echo '<td valign="top"><a href="#" id="'.$row['id'].'" class="delete_birthday"><img src="administrator/components/com_media/images/remove.png" alt="Delete user" /></a><input type="hidden" name="id[]" value="'.$row['id'].'" /></td>';
            echo "<td valign='top' style='border-bottom:1px dotted #333333; padding:2px;'>";
            echo $row['name'];
            echo "</td>";
            echo "<td valign='top' style='border-bottom:1px dotted #333333; padding:2px;'>";
            echo $row['email'];
            echo "</td>";
            echo "<td align='center' valign='top' style='border-bottom:1px dotted #333333; padding:2px;'>";
            echo $row['birthday'];
            echo "</td>";
            echo "<td align='center' valign='top' style='border-bottom:1px dotted #333333; padding:2px;'>";
            echo $row['birthmonth'];
            echo "</td>";
            echo "</tr>";
        }
        echo "</table>";
        ?>
        <script type="text/javascript">
        $(document).ready(function() {
            alert("hello");
        });
        </script>

    At the moment, nothing alerts (I'm just alerting to test whether jQuery gets recognised; I am obviously going to put in click handlers), so I assume $(document).ready() never gets triggered. I can see the code in the source, but it just never gets called. Does anybody have any advice? BTW, the SC jQuery plugin is already installed to prevent library conflicts.
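
    One likely culprit worth checking, sketched below: jQuery loaded through a Joomla plugin typically runs in noConflict mode so it can coexist with MooTools, which means $ is not defined even though jQuery is present. Using the full jQuery name, and re-aliasing $ locally, is the standard workaround:

        <script type="text/javascript">
        // jQuery passes itself as the first argument to the ready
        // callback, so "$" can be safely re-aliased inside it:
        jQuery(document).ready(function ($) {
            alert("hello");
        });
        </script>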

    Read the article

  • Can zlib.crc32 or zlib.adler32 be safely used to mask primary keys in URLs?

    - by David Eyk
    In Django Design Patterns, the author recommends using zlib.crc32 to mask primary keys in URLs. After some quick testing, I noticed that crc32 produces negative integers about half the time, which seems undesirable for use in a URL. zlib.adler32 does not appear to produce negatives, but is described as "weaker" than CRC. Is this method (either CRC or Adler-32) safe to use in a URL as an alternative to a primary key (i.e., is it collision-safe)? Is the "weaker" Adler-32 a satisfactory alternative for this task? And how the heck do you reverse this? That is, how do you determine the original primary key from the checksum?
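
    A hedged note on the first symptom: on Python 2, zlib.crc32 can return negative values because the result is treated as a signed 32-bit integer; masking to 32 bits makes it non-negative on both Python 2 and 3. The sketch's closing comment speaks to the last question:

        import zlib

        def masked_id(pk):
            # Force an unsigned 32-bit result (non-negative on Python 2 and 3)
            return zlib.crc32(str(pk).encode('ascii')) & 0xFFFFFFFF

        # A CRC is a checksum, not a cipher: it cannot be reversed and it is
        # not collision-free, so the masked value must be stored (or looked
        # up) alongside the real primary key rather than decoded from it.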

    Read the article
