Search Results

Search found 8370 results on 335 pages for 'seo friendly urls'.

Page 154/335

  • Multiple URLs to 1 website with a wildcard SSL.

    - by dagda1
    Hi, at the moment we have 27 separate sites in IIS6, each with its own URL under the same parent domain, e.g. https://company1.mycompany.com, https://company2.mycompany.com, etc. To further complicate things, there is one wildcard certificate which covers the subdomain *.mycompany.com and is assigned to each website. All these websites run under the same codebase. We want to consolidate all these websites into one website. Are there any issues with having a large number of host headers running under one IIS6 site, or is there a better way of configuring the site? Thanks Paul
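
    A hedged sketch of how the consolidated site's bindings might be added in IIS6, assuming the wildcard certificate is already installed and the consolidated site has metabase ID 1 (IIS6 only accepts host headers on HTTPS bindings via the command line, and the exact values should be checked against your metabase):

        REM Add HTTP host headers to the consolidated site (site ID 1 assumed)
        cscript //nologo adsutil.vbs set /W3SVC/1/ServerBindings ":80:company1.mycompany.com" ":80:company2.mycompany.com"
        REM Add the matching HTTPS host headers so the *.mycompany.com wildcard cert serves all of them
        cscript //nologo adsutil.vbs set /W3SVC/1/SecureBindings ":443:company1.mycompany.com" ":443:company2.mycompany.com"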

    Read the article

  • SharePoint 2007: Moving main site to be a subsite - How can URLs be redirected/changed?

    - by program247365
    The setup: SharePoint 2007 (MOSS Enterprise) on WINSVR03/IIS6, one site collection with one access mapping (http://mainsite). Currently I'm moving the main SharePoint site in our one site collection to be a subsite in a new site collection. I'm using the SharePoint Content Deployment Wizard to complete this task (http://spdeploymentwizard.codeplex.com/). The question: the main site http://mainsite being moved has many subsites, etc. I want to be sure that URLs like http://mainsite/subsite/doclib/doc1.docx map to and redirect to the new URL http://newsite/mainsite/subsite/doclib/doc1.docx. Furthermore, I'm aware of this - http://rdacollaboration.codeplex.com/releases/view/28073 - however, is it IIS7 only? That wouldn't work for me. This question - http://serverfault.com/questions/107537/dealing-with-moved-documents-and-sites-in-sharepoint - is the only one I see that is similar. Would an IIS redirect of http://mainsite to http://newsite/mainsite work only for the root URL?
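
    On the last point, IIS6 can issue URL-preserving redirects through the HttpRedirect metabase property, so the redirect need not be limited to the root URL. A hedged sketch using adsutil.vbs on the old site (the site ID is assumed, and the $S/$Q suffix-and-query variables plus the flag list are worth verifying against the IIS6 metabase documentation before relying on them):

        REM Redirect http://mainsite/<path> to http://newsite/mainsite/<path>, preserving path and query string
        cscript //nologo adsutil.vbs set /W3SVC/1/ROOT/HttpRedirect "http://newsite/mainsite$S$Q, EXACT_DESTINATION, PERMANENT"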

    Read the article

  • Clean URLs with mod_rewrite and URL-encoded characters cause a 404?

    - by Richard JP Le Guen
    I have a web site using mod_rewrite to get some clean URLs and custom 404 pages. My .htaccess file looks like this: <IfModule mod_rewrite.c> RewriteEngine On RewriteCond %{REQUEST_FILENAME} !-f RewriteRule ^(.*)$ index.php?clean_url=$1 [QSA,L] </IfModule> What puzzles me is that if the URL contains a %2F (url-encoded /) the server seems to force a 404. As an example, http://example.com/category/article would be a normal article, but http://example.com/category%2farticle gives a server-generated 404 page (not the custom 404 page). I wouldn't have expected this... why is this happening? Is there a way around it?
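
    For what it's worth, Apache rejects encoded slashes in the path by default and returns its own 404 before mod_rewrite ever sees the request. A minimal sketch of the usual workaround, assuming Apache 2.x and access to the main server or virtual-host configuration (the directive is not honoured in .htaccess):

        # httpd.conf / vhost config: accept %2F in the path instead of returning 404
        AllowEncodedSlashes On
        # On Apache 2.2.18+ / 2.4, NoDecode keeps the %2F encoded so the rewritten
        # clean_url parameter can still distinguish it from a literal slash:
        # AllowEncodedSlashes NoDecode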

    Read the article

  • Is there a free tool/package that can monitor web traffic and display URLs accessed? [closed]

    - by Anthony
    I couldn't find a similar question, but then maybe I am searching for the wrong terms. A few years ago I used a router-like device, I'm pretty sure it was a SonicWall, that did this on a client's site. Basically all traffic would be routed through this device and it allowed the manager/administrator to inspect web usage of the workers, determine how often certain resources were accessed and block them if necessary (much like a content filter). It showed reports based on the domain name reached, e.g. Facebook.com, Bebo.com and so on. It also displayed the usual IP traffic information, etc.; it was a UTM as well. I have tried Endian Firewall, with its NTOP install, but I don't think that will show URLs browsed. Maybe I just haven't found it in NTOP yet? I need this to troubleshoot connection and traffic issues at my home, with about twenty devices/users, so I didn't want to buy a dedicated solution, and I have spare hardware to use a community product.

    Read the article

  • Is there a method to export the URLs of the open tabs of a Firefox window?

    - by hekevintran
    If I have a Firefox window open that contains 10 tabs, is there a way in Firefox or by a plug-in to get the URLs of those 10 tabs as a text file or some other format? Right now if I want to do this I need to copy the URL of tab A, paste it somewhere, move to tab B, and repeat. I could also bookmark all the tabs into a folder and export that, but that seems like such a hassle. If there is no such method, could someone point me to some documents that describe the basics of writing a Firefox plug-in. I am willing to write this myself if there is no "standard" way.
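
    If you do end up writing it yourself, a minimal sketch of the core loop, assuming a chrome-privileged script running in the context of a browser window (the gBrowser API used by classic Firefox extensions; names should be verified against the current add-on documentation):

        // Collect the URL of every tab in the current window (chrome-privileged extension code)
        var urls = [];
        for (var i = 0; i < gBrowser.browsers.length; i++) {
          urls.push(gBrowser.browsers[i].currentURI.spec);
        }
        // One URL per line, ready to copy to the clipboard or write to a text file
        var text = urls.join("\n");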

    Read the article

  • Where to find URLs for sources.list for debian for running apt-get update?

    - by Boda Cydo
    Can anyone tell me where to find URLs to put in /etc/apt/sources.list for Debian so that I can run apt-get update? I couldn't find a precise answer by searching Google. When I currently try running apt-get update I get: W: Failed to fetch ftp://ftp.debian.org/debian/dists/lenny/contrib/binary-i386/Packages Unable to fetch file, server said 'Failed to open file. ' [IP: 130.89.148.12 21] W: Failed to fetch ftp://ftp.debian.org/debian/dists/lenny/non-free/binary-i386/Packages Unable to fetch file, server said 'Failed to open file. ' [IP: 130.89.148.12 21] I have no idea how to solve this. Here is what my current sources.list looks like: deb ftp://ftp.debian.org/debian lenny main contrib non-free deb-src ftp://ftp.debian.org/debian lenny main contrib non-free deb ftp://ftp.debian.org/debian lenny/updates main contrib non-free deb-src ftp://ftp.debian.org/debian lenny/updates main contrib non-free I'm running debian_version 5.0.8: # cat /etc/debian_version 5.0.8 Thanks!
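
    One hedged possibility, assuming lenny's packages have moved off the primary mirrors: point sources.list at the Debian archive over HTTP instead of FTP, along these lines (no security updates are published for lenny any more, so the lenny/updates lines can simply be dropped):

        # /etc/apt/sources.list for Debian 5.0 (lenny), served from the archive
        deb http://archive.debian.org/debian/ lenny main contrib non-free
        deb-src http://archive.debian.org/debian/ lenny main contrib non-free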

    Read the article

  • How can I redirect URLs using the proxy module in Apache?

    - by LearningIT
    This seems like a super-basic question but I am having a hard time tracking down a straightforward solution, so appreciate any help and patience with me on this: I want to configure my Apache proxy server to redirect certain URLs so that, for example, a web browser HTTP request for www.olddomain.com gets passed to the proxy server which then routes the request to www.newdomain.com which sends a response to the proxy server which then passes it back to the web browser. Seems so simple, yet I don't see how to achieve this on Apache. I know Squid/Squirm offer this functionality so am guessing I am missing something really basic. I know I can use RewriteRule to dynamically modify the URL and pass it to the proxy server, but I effectively want to do the reverse, whereby the proxy server receives the original URL, applies the RewriteRule, and then forwards the HTTP request to the new URL. Hope that makes sense. Thanks in advance for any help.
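
    A minimal sketch of one way to do this with mod_proxy, assuming the proxy answers for www.olddomain.com and that mod_proxy and mod_proxy_http are loaded; the hostnames are placeholders:

        <VirtualHost *:80>
            ServerName www.olddomain.com
            # Forward every request to the new domain and fix up the Location /
            # Content-Location headers in the responses on the way back
            ProxyPass        / http://www.newdomain.com/
            ProxyPassReverse / http://www.newdomain.com/
        </VirtualHost>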

    Read the article

  • Is there a way to redirect certain URLs to specific web browsers in Linux?

    - by jraxxo
    I'm using Chrome as my default browser in Ubuntu 12.10. I need to use Firefox for business purposes (certain websites pertaining to my work only work with Firefox). Is there a way to force Ubuntu to use Firefox for certain types of URLs (maybe as defined by a regular expression) while maintaining Chrome as my default browser for all my other tasks? Perhaps a shell script running in the background? I'd like this to work system-wide, covering links from Chrome itself as well as from PDFs/ODTs, etc. I have searched for solutions, but I couldn't find anything besides OpenWith, a Firefox extension which adds a button to open certain links in other browsers; that would again require me to open Firefox beforehand, which does not help me at all. Does anyone have any ideas? Something like Choosy for Linux?
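
    One way people solve this is with a small wrapper script registered as the default browser, which forwards each URL to Firefox or Chrome based on a pattern match. A rough sketch, assuming the script is saved somewhere on the PATH, made executable, and wired up as the default browser through a custom .desktop file; the URL patterns and browser commands are placeholders:

        #!/bin/sh
        # browser-dispatch.sh: send work URLs to Firefox, everything else to Chrome
        url="$1"
        case "$url" in
            *intranet.example.com*|*work-portal.example.com*)
                exec firefox "$url" ;;
            *)
                exec google-chrome "$url" ;;
        esac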

    Read the article

  • De-index URL parameters by value

    - by Doug Firr
    This question is lengthy, so allow me to provide a one-sentence summary: I need to get Google to de-index URLs that have parameters with certain values appended. I have a website example.com with language translations. There used to be many translations but I deleted them all so that only the English (default) and French options remain. When one selects a language option, a parameter is added to the URL. For example, the home page: https://example.com (default) https://example.com/main?l=fr_FR (French) I added a robots.txt to stop Google from crawling any of the language translations: # robots.txt generated at http://www.mcanerin.com User-agent: * Disallow: Disallow: /cgi-bin/ Disallow: /*?l= So any pages containing "?l=" should not be crawled. I checked in GWT using the robots testing tool. It works. But under HTML improvements the previously crawled language translation URLs remain indexed. The internet says to add a 404 to the header of the removed URLs so Google knows to de-index them. I checked to see what my CMS would throw up if I visited one of the URLs that should no longer exist. This URL was listed in GWT under duplicate title tags (one of the reasons I want to scrub up my URLs): https://example.com/reports/view/884?l=vi_VN&l=hy_AM This URL should not exist - I removed the language translations. The page loads when it should not! I played around. I typed example.com?whatever123 It seems that parameters always load as long as everything before the question mark is a real URL. So if Google has indexed all these URLs with parameters, how do I remove them? I cannot check whether a 404 is being generated, because the page always loads; it's a parameter that needs to be de-indexed.
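
    One caveat worth noting: robots.txt only blocks crawling, and Google cannot see a 404/410 or a noindex on a URL it is not allowed to fetch, so the Disallow can actually keep stale URLs in the index longer. A hedged sketch of the alternative, assuming Apache with mod_rewrite: let the crawler back in and answer every request carrying an l= parameter with 410 Gone (the Disallow: /*?l= line would then come out of robots.txt so the URLs can be recrawled and dropped):

        # .htaccess: serve 410 Gone for any URL carrying an l= query parameter
        RewriteEngine On
        RewriteCond %{QUERY_STRING} (^|&)l= [NC]
        RewriteRule ^ - [G,L]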

    Read the article

  • Difference between two kinds of Bing URL Referers

    - by joshuahedlund
    Most of the referral URLs that I get from Bing have the following syntax: http://www.bing.com/search?q=keywords+keywords&[some other variables] However I just noticed that maybe 10-20% of them are coming in like this: http://www.bing.com/url?source=search&[some other variables]&url=http%3A%2F%2Fwww.example.com/user-landing-page-on-my-site&yrktarget=_top&q=keywords+keywords&[some other variables] The first syntax gives me the keywords the user typed in, but the second actually gives me the keywords the user typed in and their landing page on my site. I was originally unaware of this second kind altogether because I have a customized referral report that filters out URLs containing my domain. But now that I've noticed them I want to know why they occur, to see if I can get more to occur this way, because the second syntax contains more valuable information. If I go to one of the first URLs, it gives me a typical Bing query page. The second URLs seem to just redirect me to the Bing home page. I'm not sure if it has to do with the kind of search being performed (I also get a few http://www.bing.com/shopping/search?q= referers) or some other metric. Does anyone know what causes some referral URLs from Bing to have the /search?q syntax and others to have the /url?source syntax? P.S. I have verified that I am getting both kinds of URLs from non-advertising clicks. P.P.S. I am not talking about data in Google Analytics or similar software but the raw $_SERVER['HTTP_REFERER'] value coming from the client's original request.
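
    For whichever format arrives, both the keywords and (when present) the landing page can be pulled out of the raw referrer with standard PHP; a small sketch, assuming only the two URL shapes shown above:

        <?php
        // Extract q= and url= from a Bing referrer, whichever of the two syntaxes it uses
        $referer = isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '';
        parse_str((string) parse_url($referer, PHP_URL_QUERY), $params);
        $keywords    = isset($params['q'])   ? $params['q']   : null;
        $landingPage = isset($params['url']) ? $params['url'] : null; // only present in the /url?source=search form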

    Read the article

  • De-index URL parameters

    - by Doug Firr
    This question is lengthy, so allow me to provide a one-sentence summary: I need to get Google to de-index URLs that have certain parameters appended. I have a website example.com with language translations. There used to be many translations but I deleted them all so that only the English (default) and French options remain. When one selects a language option, a parameter is added to the URL. For example, the home page: https://example.com (default) https://example.com/main?l=fr_FR (French) I added a robots.txt to stop Google from crawling any of the language translations: # robots.txt generated at http://www.mcanerin.com User-agent: * Disallow: Disallow: /cgi-bin/ Disallow: /*?l= So any pages containing "?l=" should not be crawled. I checked in GWT using the robots testing tool. It works. But under HTML improvements the previously crawled language translation URLs remain indexed. The internet says to add a 404 to the header of the removed URLs so Google knows to de-index them. I checked to see what my CMS would throw up if I visited one of the URLs that should no longer exist. This URL was listed in GWT under duplicate title tags (one of the reasons I want to scrub up my URLs): https://example.com/reports/view/884?l=vi_VN&l=hy_AM This URL should not exist - I removed the language translations. The page loads when it should not! I played around. I typed example.com?whatever123 It seems that parameters always load as long as everything before the question mark is a real URL. So if Google has indexed all these URLs with parameters, how do I remove them? I cannot check whether a 404 is being generated, because the page always loads; it's a parameter that needs to be de-indexed.

    Read the article

  • Does a large number of internal broken links affect SEO?

    - by TheBigK
    We've a WordPress blog and had the Disqus plugin installed for several months. Around late August this year, the plugin created a ton of URLs that linked to non-existent locations on our website. For example - Correct URL: domain.com/correct-URL/ Disqus created - domain.com/correct-URL/344322/ - Throws 404 domain.com/correct-URL/433466/ - Throws 404 So essentially, Google found a LARGE number of broken links that pointed to unknown locations on our own domain. As the count of those errors (404) rose, our site suffered a massive drop in traffic and the crawl rate dropped to 10% of what it was earlier. I wish to know: can a large number of internal broken links (we've over 99k of them) cause rankings to drop? I've fixed the issue in one go by creating 301 redirects for each bad URL to the correct URL and removing Disqus. Google however drops the count by ~1000 daily as I mark errors as 'fixed' in Google Webmaster Tools. Is there any way to speed this up? Should I set a custom crawl rate of 'Fast' in GWT to make Google crawl our website faster? I'd appreciate your inputs and experience sharing.
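
    For reference, the bulk 301s do not have to be one rule per URL; if every bad URL is just the correct URL with a numeric segment appended, a single rewrite covers them all. A sketch assuming Apache/.htaccess and that legitimate permalinks never end in a purely numeric segment:

        # .htaccess: fold domain.com/correct-URL/344322/ back onto domain.com/correct-URL/
        RewriteEngine On
        RewriteRule ^(.+)/[0-9]+/?$ /$1/ [R=301,L]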

    Read the article

  • SEO Blog Indexing: Dot WordPress Versus a Registered Domain?

    - by rumspringa00
    I've used WordPress for a few of my clients' sites, mostly small businesses and ecommerce sites. I have found through Google Analytics, as well as the All in One Webmaster plugin, that when it comes to social media, using WordPress is a surefire way of getting your site indexed by Google and occasionally Bing and Yahoo. Since I am a heavy WP user, I'd like to contribute by registering a dot-WordPress domain for my portfolio. When using a WP installation concurrently with a WP domain, e.g. myportfolio.wordpress.com, will the site be more or less likely to be indexed than a generic myportfolio.com domain? I've seen mixed opinions where people seem to favor a WP domain for URL output, while others say that it's a moot point and that Google will not favor a WP domain over a dot-com domain as long as your meta tags are updated and content is keyword optimized. I tend to disagree and believe a WP domain would be more likely to be indexed and output more URLs than an individual, laconic domain like myportfolio.com. Am I wrong? Thanks in advance!

    Read the article

  • What's the best way to make a mobile friendly site?

    - by Frew
    Speaking entirely in technology-free terms, what is the best way to make a mobile friendly site? That is, I want to make a site that will work on a regular computer but also have mobile versions of the pages. Should I rewrite each page? The pages will probably have different functionality, so should I rewrite the backend code? Should it be an effectively different site with the same database?

    Read the article

  • How can I read pcap files in a friendly format?

    - by Tony
    a simple cat on the pcap file looks terrible: $cat tcp_dump.pcap ?ò????YVJ? JJ ?@@.?E<??@@ ?CA??qe?U?????h? .Ceh?YVJ?? JJ ?@@.?E<??@@ CA??qe?U?????z? .ChV?YVJ$?JJ ?@@.?E<-/@@A?CA??9????F???A&? .Ck??YVJgeJJ@@.??#3E<@3{n??9CA??P???F???<K? ??`.Ck??YVJgeBB ?@@.?E4-0@@AFCA??9????F?P????? .Ck???`?YVJ?""@@.??#3E?L@3?I??9CA??P???F????? ???.Ck?220-rly-da03.mx etc. I tried to make it prettier with: sudo tcpdump -ttttnnr tcp_dump.pcap reading from file tcp_dump.pcap, link-type EN10MB (Ethernet) 2009-07-09 20:57:40.819734 IP 67.23.28.65.49237 > 216.239.113.101.25: S 2535121895:2535121895(0) win 5840 <mss 1460,sackOK,timestamp 776168808 0,nop,wscale 5> 2009-07-09 20:57:43.819905 IP 67.23.28.65.49237 > 216.239.113.101.25: S 2535121895:2535121895(0) win 5840 <mss 1460,sackOK,timestamp 776169558 0,nop,wscale 5> 2009-07-09 20:57:47.248100 IP 67.23.28.65.42385 > 205.188.159.57.25: S 2644526720:2644526720(0) win 5840 <mss 1460,sackOK,timestamp 776170415 0,nop,wscale 5> 2009-07-09 20:57:47.288103 IP 205.188.159.57.25 > 67.23.28.65.42385: S 1358829769:1358829769(0) ack 2644526721 win 5792 <mss 1460,sackOK,timestamp 4292123488 776170415,nop,wscale 2> 2009-07-09 20:57:47.288103 IP 67.23.28.65.42385 > 205.188.159.57.25: . ack 1 win 183 <nop,nop,timestamp 776170425 4292123488> 2009-07-09 20:57:47.368107 IP 205.188.159.57.25 > 67.23.28.65.42385: P 1:481(480) ack 1 win 1448 <nop,nop,timestamp 4292123568 776170425> 2009-07-09 20:57:47.368107 IP 67.23.28.65.42385 > 205.188.159.57.25: . ack 481 win 216 <nop,nop,timestamp 776170445 4292123568> 2009-07-09 20:57:47.368107 IP 67.23.28.65.42385 > 205.188.159.57.25: P 1:18(17) ack 481 win 216 <nop,nop,timestamp 776170445 4292123568> 2009-07-09 20:57:47.404109 IP 205.188.159.57.25 > 67.23.28.65.42385: . ack 18 win 1448 <nop,nop,timestamp 4292123606 776170445> 2009-07-09 20:57:47.404109 IP 205.188.159.57.25 > 67.23.28.65.42385: P 481:536(55) ack 18 win 1448 <nop,nop,timestamp 4292123606 776170445> 2009-07-09 20:57:47.404109 IP 67.23.28.65.42385 > 205.188.159.57.25: P 18:44(26) ack 536 win 216 <nop,nop,timestamp 776170454 4292123606> 2009-07-09 20:57:47.444112 IP 205.188.159.57.25 > 67.23.28.65.42385: P 536:581(45) ack 44 win 1448 <nop,nop,timestamp 4292123644 776170454> 2009-07-09 20:57:47.484114 IP 67.23.28.65.42385 > 205.188.159.57.25: . ack 581 win 216 <nop,nop,timestamp 776170474 4292123644> 2009-07-09 20:57:47.616121 IP 67.23.28.65.42385 > 205.188.159.57.25: P 44:50(6) ack 581 win 216 <nop,nop,timestamp 776170507 4292123644> 2009-07-09 20:57:47.652123 IP 205.188.159.57.25 > 67.23.28.65.42385: P 581:589(8) ack 50 win 1448 <nop,nop,timestamp 4292123855 776170507> 2009-07-09 20:57:47.652123 IP 67.23.28.65.42385 > 205.188.159.57.25: . 
ack 589 win 216 <nop,nop,timestamp 776170516 4292123855> 2009-07-09 20:57:47.652123 IP 67.23.28.65.42385 > 205.188.159.57.25: P 50:56(6) ack 589 win 216 <nop,nop,timestamp 776170516 4292123855> 2009-07-09 20:57:47.652123 IP 67.23.28.65.42385 > 205.188.159.57.25: F 56:56(0) ack 589 win 216 <nop,nop,timestamp 776170516 4292123855> 2009-07-09 20:57:47.668124 IP 67.23.28.65.49239 > 216.239.113.101.25: S 2642380481:2642380481(0) win 5840 <mss 1460,sackOK,timestamp 776170520 0,nop,wscale 5> 2009-07-09 20:57:47.692126 IP 205.188.159.57.25 > 67.23.28.65.42385: P 589:618(29) ack 57 win 1448 <nop,nop,timestamp 4292123893 776170516> 2009-07-09 20:57:47.692126 IP 67.23.28.65.42385 > 205.188.159.57.25: R 2644526777:2644526777(0) win 0 2009-07-09 20:57:47.692126 IP 205.188.159.57.25 > 67.23.28.65.42385: F 618:618(0) ack 57 win 1448 <nop,nop,timestamp 4292123893 776170516> 2009-07-09 20:57:47.692126 IP 67.23.28.65.42385 > 205.188.159.57.25: R 2644526777:2644526777(0) win 0 Well...that is much prettier but it doesn't show the actual messages. I can actually extract more information just viewing the RAW file. What is the best ( and preferably easiest) way to just view all the contents of the pcap file? UPDATE Thanks to the responses below, I made some progress. Here is what it looks like now: tcpdump -qns 0 -A -r blah.pcap 20:57:47.368107 IP 205.188.159.57.25 > 67.23.28.65.42385: tcp 480 0x0000: 4500 0214 834c 4000 3306 f649 cdbc 9f39 [email protected] 0x0010: 4317 1c41 0019 a591 50fe 18ca 9da0 4681 C..A....P.....F. 0x0020: 8018 05a8 848f 0000 0101 080a ffd4 9bb0 ................ 0x0030: 2e43 6bb9 3232 302d 726c 792d 6461 3033 .Ck.220-rly-da03 0x0040: 2e6d 782e 616f 6c2e 636f 6d20 4553 4d54 .mx.aol.com.ESMT 0x0050: 5020 6d61 696c 5f72 656c 6179 5f69 6e2d P.mail_relay_in- 0x0060: 6461 3033 2e34 3b20 5468 752c 2030 3920 da03.4;.Thu,.09. 0x0070: 4a75 6c20 3230 3039 2031 363a 3537 3a34 Jul.2009.16:57:4 0x0080: 3720 2d30 3430 300d 0a32 3230 2d41 6d65 7.-0400..220-Ame 0x0090: 7269 6361 204f 6e6c 696e 6520 2841 4f4c rica.Online.(AOL 0x00a0: 2920 616e 6420 6974 7320 6166 6669 6c69 ).and.its.affili 0x00b0: 6174 6564 2063 6f6d 7061 6e69 6573 2064 ated.companies.d etc. This looks good, but it still makes the actual message on the right difficult to read. Is there a way to view those messages in a more friendly way? UPDATE This made it pretty: tcpick -C -yP -r tcp_dump.pcap Thanks!
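
    Another option, if Wireshark's command-line tool is available: tshark can decode the same capture and print each packet fully dissected, which is often a friendlier view of SMTP exchanges like the one above. A hedged one-liner, assuming tshark is installed:

        # Full verbose decode of every packet in the capture, including the SMTP lines
        tshark -r tcp_dump.pcap -V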

    Read the article

  • How to remove repeated records from results in LINQ to SQL

    - by Sadegh
    Hi, I want to remove repeated records from the results, but Distinct doesn't do this for me! Why??? var results = (from words in _Xplorium.Words join wordFiles in _Xplorium.WordFiles on words.WordId equals wordFiles.WordId join files in _Xplorium.Files on wordFiles.FileId equals files.FileId join urls in _Xplorium.Urls on files.UrlId equals urls.UrlId where files.Title.Contains(query) || files.Description.Contains(query) orderby wordFiles.Count descending select new SearchResultItem() { Title = files.Title, Url = urls.Address, Count = wordFiles.Count, CrawledOn = files.CrawledOn, Description = files.Description, Lenght = files.Lenght, UniqueKey = words.WordId + "-" + files.FileId + "-" + urls.UrlId }).Distinct();
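
    For what it's worth, Distinct() on a projection like this is translated to SQL and compares every selected column, so rows that differ in any column survive. A hedged sketch of one common workaround: finish the query in memory and keep one row per key (the names are reused from the query above, and whether UniqueKey is the right definition of "repeated" is an assumption):

        // Group by the key that defines a duplicate and keep the first row of each group
        var deduped = results
            .AsEnumerable()                 // switch to LINQ to Objects for the de-duplication
            .GroupBy(r => r.UniqueKey)
            .Select(g => g.First())
            .ToList();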

    Read the article

  • How should I loop a Nokogiri search in Ruby?

    - by kim
    I have the following code that retrieves the title of each URL from an array that contains a list of URLs. require 'rubygems' require 'nokogiri' require 'open-uri' @urls = ["http://google.com", "http://yahoo.com", "http://rubyonrails.org"] @found_titles = Array.new @found_titles[0] = Nokogiri::HTML(open("#{@urls[0]}")).search("title").inner_html #this can go on forever...but #@found_titles[1] = Nokogiri::HTML(open("#{@urls[1]}")).search("title").inner_html #@found_titles[2] = Nokogiri::HTML(open("#{@urls[2]}")).search("title").inner_html puts "#{@found_titles[0]}" How should I form a loop for this so I can get the titles even when the list in the @urls array gets longer?
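
    A minimal sketch of the loop, reusing the requires and the @urls array from the code above as-is:

        @found_titles = @urls.map do |url|
          Nokogiri::HTML(open(url)).search("title").inner_html
        end

        @found_titles.each { |title| puts title }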

    Read the article

  • IIS7.5 Outbound Rule for lower case URLs in <a href="...">

    - by Quog
    Hi, I know how to canonicalise the case of URLs on incoming requests to IIS7.5; in fact, there's a built-in rule template to start from. But how about outbound (without changing the code)? This is where I got to so far: <outboundRules> <rule name="Outbound lowercase" preCondition="IsHTML" enabled="true"> <match filterByTags="A" pattern="[A-Z]" ignoreCase="false" /> <action type="Rewrite" value="{ToLower:{R:0}}" /> </rule> <preConditions> <preCondition name="IsHTML"> <add input="{RESPONSE_CONTENT_TYPE}" pattern="^text/html" /> </preCondition> </preConditions> </outboundRules> However, IIS barfs on the action with a 500, implying an invalid web.config, probably on the {ToLower:XXXX} which I stole from the MS-supplied inbound rule template. Does anyone know how to do this? Does anyone know where the options are fully documented? (My GoogleNinja skills failed me: I found this, but "Specifies value syntax for the rule. This element is available only for the Rewrite action type" is not really comprehensive.) Thanks, Damian

    Read the article

  • How to route tree-structured URLs with ASP.NET Routing?

    - by Venemo
    Hello everyone, I would like to achieve something very similar to this question, with some enhancements. There is an ASP.NET MVC web application. I have a tree of entities. For example, a Page class which has a property called Children, which is of type IList<Page>. (An instance of the Page class corresponds to a row in a database.) I would like to assign a unique URL to every Page in the database. I handle Page objects with a controller called PageController. Example URLs: http://mysite.com/Page1/ http://mysite.com/Page1/SubPage/ http://mysite.com/Page/ChildPage/GrandChildPage/ You get the picture. So, I'd like every single Page object to have its own URL that is equal to its parent's URL plus its own name. In addition to that, I also would like the ability to map a single Page to the / (root) URL. I would like to apply these rules: If a URL can be handled by any other route, or a file exists in the filesystem at the specified URL, let the default URL mapping happen; if a URL can be handled by the virtual path provider, let that handle it; if there is no other, map the remaining URLs to the PageController class. I see the following possible solutions: Map a route for each page individually. This requires me to go over the entire tree when the application starts, adding an exact-match route to the end of the route table. Or I could add a route with {*path} and write a custom IRouteHandler that handles it, but I can't see how I could deal with the first two rules then, since this handler would get to handle everything. So far, the first solution seems to be the right one, because it is also the simplest. I would really appreciate your thoughts on this. Thank you in advance!
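
    For comparison, a hedged sketch of the second option: a single catch-all route registered after all the others, guarded by a custom route constraint so it only claims paths that actually resolve to a Page (PageRepository and PageExistsConstraint are hypothetical names; static files are still served because RouteCollection.RouteExistingFiles defaults to false):

        // Registered last in Global.asax, after every other route
        routes.MapRoute(
            "PageTree",
            "{*path}",                                   // e.g. "Page1/SubPage"
            new { controller = "Page", action = "Show" },
            new { path = new PageExistsConstraint() });  // otherwise routing falls through to a 404

        public class PageExistsConstraint : IRouteConstraint
        {
            public bool Match(HttpContextBase httpContext, Route route, string parameterName,
                              RouteValueDictionary values, RouteDirection routeDirection)
            {
                var path = values[parameterName] as string ?? string.Empty;
                return PageRepository.FindByPath(path) != null;   // hypothetical lookup against the Page tree
            }
        }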

    Read the article

  • How do I use a jQuery not selector to select relative URLs?

    - by Matt
    I'm working on a little jQuery script to add Google Analytics pageTracker onclick data to all outbound URLs on my forum, allowing me to track clicks to external sites. I don't want to add the onclick to internal links on forum.sitename or sitename, and I don't want to add it to any hrefs marked # or that start with /. My script below works nicely, but for one minor problem! All of the forum's internal URLs are relative and don't start with /. I appear to have no way to change that, so I need to modify the jQuery below to prevent it from adding the onclick to those relative links, as it currently does. What I want to do is write a .not() function like .not("[href!^=http") to prevent jQuery from adding the onclick to any hrefs which do not start with http. However, .not() appears not to support this. I'm new to jQuery and can't figure this out. Any pointers would be massively appreciated. $(document).ready(function(){ // Get URL from a href var URL = $("a").attr('href'); // Add pageTracker data for GA tracking $("a") .not("[href^=#]") .not("[href^=http://forum.sitename]") .not("[href^=http://www.sitename]") .attr("onclick","pageTracker._trackEvent('Outgoing_Links', 'Forum', " + URL + ");") ; }); Thanks!
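
    Since attribute selectors cannot express "does not start with", one hedged way around it is to flip the logic: select only the absolute http links, drop the internal domains, and read each link's own href inside .each() (the single URL variable above would otherwise stamp every link with the first anchor's href). A sketch along those lines:

        $(document).ready(function () {
          $('a[href^="http"]')                        // absolute links only; relative and "#" hrefs never match
            .not('[href^="http://forum.sitename"]')
            .not('[href^="http://www.sitename"]')
            .each(function () {
              var url = $(this).attr('href');         // this link's own destination
              $(this).attr('onclick',
                "pageTracker._trackEvent('Outgoing_Links', 'Forum', '" + url + "');");
            });
        });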

    Read the article

  • Is using value converters to generate GUI-friendly strings a misuse of value converters?

    - by tempy
    Currently, I use value converters to generate user-friendly strings for the GUI. As an example, I have a window that displays the number of available entities in the status bar. The ViewModel simply has an int dependency property that the calling code can set, and then on the binding for the textbox that displays the number of entities, I specify the int dependency property and a value converter that changes "x" into "x entities available". My code is starting to become littered with these converters, and I have a large number of annoying resource declarations in my XAML, and yet I like them because all the GUI-specific string formatting is isolated in the converters and the calling code doesn't have to worry about it. Still, I wonder whether this is the purpose that value converters were made for.
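
    For simple cases like the status-bar text, one lighter-weight alternative worth weighing against a converter is the binding's own StringFormat (available since .NET 3.5 SP1), which keeps the GUI wording in the view without a converter class or resource declaration; a sketch, with AvailableCount standing in for the actual property name:

        <!-- "{}" escapes the leading "{0}" so XAML does not parse it as a markup extension -->
        <TextBlock Text="{Binding AvailableCount, StringFormat='{}{0} entities available'}" />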

    Read the article

  • How can I use gnuplot to plot data that is in an un-friendly format?

    - by Zack
    I need to graph some data that is not exactly in the friendliest format; most examples or usage guides put things in a nice column/table that is very easy to parse and graph. However, I have the following format (and am a bit stuck as to how to tackle it): DATE1 label_1 xx yy DATE1 label_2 xx yy DATE1 label_3 xx yy DATE2 label_2 xx yy DATE2 label_3 xx yy DATE3 label_1 xx yy DATE3 label_2 xx yy DATE3 label_3 xx yy DATE4 label_2 xx yy DATE4 label_3 xx yy ...continues *I've added the extra space between the dates for readability. **Note: under DATE2 and DATE4, label_1 is missing, i.e. the data file may have labels that come and go, which should appear as a discontinuity in the graph. I'd like to have the X-axis use the DATEX values for the labels, and then create two lines for each label (xx and yy respectively). Does anyone have any suggestions on the best way to tackle this problem?
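
    A rough gnuplot sketch of one approach: filter each label out of the file with an external command and plot its xx and yy columns as two separate lines. This assumes dates in a form timefmt can describe (YYYY-MM-DD here), whitespace-separated columns, and that awk is available:

        set xdata time
        set timefmt "%Y-%m-%d"
        set format x "%Y-%m-%d"
        # one pair of lines per label; repeat (or loop) for label_2, label_3, ...
        plot "< awk '$2 == \"label_1\"' data.txt" using 1:3 with linespoints title "label_1 xx", \
             "" using 1:4 with linespoints title "label_1 yy"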

    Read the article

  • 3 fixed Columns (header and footer) using DIVs, NO Absolute DIVs, IE friendly, ALL columns stretch e

    - by Phillip Schein
    Left to right: Col1 is 560px wide with 10px padding, the middle column is 250px wide with 5px padding, and Col3 (the sidebar) is 200px wide with 3px padding. The background color should stretch vertically so all columns are equal height, no matter the text length in any column. No JavaScript (jQuery workarounds) to make it work; it needs to be pure semantic markup with CSS. Each column should have a nested column of color where the content will go. Column 1 should be SEO prominent, which means the highest nested column in the source for Google and other search engines to crawl. I have used 'The Holy Grail' layout and articles at 'A List Apart', and those solutions are so convoluted that they push the main columns left and then push them back right with padding on the nested columns. This is crazy! I try to adjust these examples, but they're not editable by just adjusting a width in the CSS or the padding, etc. Can you please help me?
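
    A hedged CSS sketch of one way to get the three equal-height columns: display: table-cell, which keeps Column 1 first in the source and needs IE8 or later (for older IE the usual fallback is a "faux columns" background image). Class names, colors and the wrapping markup are placeholders:

        /* markup assumed: <div class="row">
             <div class="col col-main">...</div>
             <div class="col col-mid">...</div>
             <div class="col col-side">...</div>
           </div> */
        .row      { display: table; table-layout: fixed; }
        .col      { display: table-cell; vertical-align: top; }
        .col-main { width: 560px; padding: 10px; background: #e8e8e8; }
        .col-mid  { width: 250px; padding: 5px;  background: #dcdcdc; }
        .col-side { width: 200px; padding: 3px;  background: #d0d0d0; }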

    Read the article

  • Search-friendly way to store checkbox values in MySQL?

    - by Alex
    What is a search-friendly way to store checkbox values in the database? Currently, checkboxes are processed as an array and values are separated by a ";" As such: <input type="checkbox" name="frequency[]" value="Daily"/> Daily <input type="checkbox" name="frequency[]" value="Weekly"/> Weekly <input type="checkbox" name="frequency[]" value="Monthly"/> Monthly The PHP backend runs implode(';', $frequency) and adds the string to the database. This works fine but it's a nightmare when it comes to searching. Is there a better way to approach this?
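
    The usual search-friendly alternative is to normalise the checkbox values into their own rows instead of an imploded string, so a search becomes an indexed WHERE/JOIN rather than a LIKE over ";"-separated text. A sketch of the schema and query, with hypothetical table and column names and assuming each row of the existing table has an id:

        -- one row per checked frequency, instead of "Daily;Weekly" packed into one column
        CREATE TABLE record_frequency (
            record_id INT UNSIGNED NOT NULL,
            frequency ENUM('Daily','Weekly','Monthly') NOT NULL,
            PRIMARY KEY (record_id, frequency)
        );

        -- "every record that checked Weekly" is now a plain indexed lookup
        SELECT record_id FROM record_frequency WHERE frequency = 'Weekly';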

    Read the article
