Search Results

Search found 5530 results on 222 pages for 'nested urls'.

Page 57/222 | < Previous Page | 53 54 55 56 57 58 59 60 61 62 63 64  | Next Page >

  • Multiple URLs to one website with a wildcard SSL certificate

    - by dagda1
    Hi, at the moment we have 27 single sites in IIS6, all with their own URLs, all on the same subdomain pattern, e.g. https://company1.mycompany.com, https://company2.mycompany.com, and so on. To further complicate things, there is one wildcard certificate which covers the subdomain *.mycompany.com and is assigned to each website. All these websites run under the same codebase. We want to consolidate all these websites into one website. Are there any issues with having a large number of host headers running under one IIS6 site, or is there a better way of configuring the site? Thanks, Paul

    Read the article

  • SharePoint 2007: Moving the main site to be a subsite - How can URLs be redirected/changed?

    - by program247365
    The setup: SharePoint 2007 (MOSS Enterprise) on WINSVR03/IIS6, one site collection, with one access mapping (http://mainsite). Currently I'm moving the main SharePoint site in our one site collection to be a subsite in a new site collection. I'm using the SharePoint Content Deployment Wizard to complete this task (http://spdeploymentwizard.codeplex.com/). The question: the main site http://mainsite being moved has many subsites, etc. I want to be sure that URLs like this: http://mainsite/subsite/doclib/doc1.docx map to and redirect to the new URL: http://newsite/mainsite/subsite/doclib/doc1.docx. Furthermore: I'm aware of this - http://rdacollaboration.codeplex.com/releases/view/28073 - however, is it IIS7 only? That wouldn't work for me. Looking at this question - http://serverfault.com/questions/107537/dealing-with-moved-documents-and-sites-in-sharepoint - it is the only one I see that is similar. Would an IIS redirect of http://mainsite to http://newsite/mainsite work only for the root URL?

    Read the article

  • Clean URLs with mod_rewrite and URL-encoded characters cause 404?

    - by Richard JP Le Guen
    I have a web site using mod_rewrite to get some clean URLs and custom 404 pages. My .htaccess file looks like this:

        <IfModule mod_rewrite.c>
          RewriteEngine On
          RewriteCond %{REQUEST_FILENAME} !-f
          RewriteRule ^(.*)$ index.php?clean_url=$1 [QSA,L]
        </IfModule>

    What puzzles me is that if the URL contains a %2F (a URL-encoded /), the server seems to force a 404. As an example, http://example.com/category/article would be a normal article, but http://example.com/category%2farticle gives a server-generated 404 page (not the custom 404 page). I wouldn't have expected this... why is this happening? Is there a way around it?
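    A minimal sketch of one likely explanation, offered as a guess: by default Apache itself rejects %2F in the path with its own 404 before mod_rewrite ever runs, and the AllowEncodedSlashes directive (server or virtual-host config only, not .htaccess) controls this.

        # Sketch, assuming access to httpd.conf or the vhost config:
        # let encoded slashes through so the request reaches mod_rewrite.
        AllowEncodedSlashes On

    Even then, the %2F may arrive already decoded to /; on newer Apache versions, AllowEncodedSlashes NoDecode keeps the encoded form intact.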

    Read the article

  • Is there a free tool/package that can monitor web traffic and display URLs accessed? [closed]

    - by Anthony
    I couldn't find a similar question, but then maybe I am searching for the wrong terms. A few years ago I used a router-like device, I'm pretty sure it was a SonicWall, that did this on a client's site. Basically all traffic would be routed through this device, and it allowed the manager/administrator to inspect web usage of the workers, determine how often certain resources were accessed, and block them if necessary (much like a content filter). It showed reports based on the domain names reached, e.g. Facebook.com, Bebo.com and so on. It also displayed the usual IP traffic information; it was a UTM as well. I have tried Endian firewall, with its NTOP install, but I don't think that will show URLs browsed. Maybe I just haven't found it in NTOP yet? I need this to troubleshoot connection and traffic issues at my home, with about twenty devices/users, so I didn't want to buy a dedicated solution, and I have spare hardware to run a community product.

    Read the article

  • Is there a method to export the URLs of the open tabs of a Firefox window?

    - by hekevintran
    If I have a Firefox window open that contains 10 tabs, is there a way in Firefox, or via a plug-in, to get the URLs of those 10 tabs as a text file or in some other format? Right now if I want to do this I need to copy the URL of tab A, paste it somewhere, move to tab B, and repeat. I could also bookmark all the tabs into a folder and export that, but that seems like such a hassle. If there is no such method, could someone point me to some documents that describe the basics of writing a Firefox plug-in? I am willing to write this myself if there is no "standard" way.

    Read the article

  • Where to find URLs for sources.list for Debian for running apt-get update?

    - by Boda Cydo
    Can anyone tell me where to find URLs to put in /etc/apt/sources.list for Debian so that I can run apt-get update? I couldn't find the precise answer by searching Google. When I currently try running apt-get update I get:

        W: Failed to fetch ftp://ftp.debian.org/debian/dists/lenny/contrib/binary-i386/Packages  Unable to fetch file, server said 'Failed to open file. ' [IP: 130.89.148.12 21]
        W: Failed to fetch ftp://ftp.debian.org/debian/dists/lenny/non-free/binary-i386/Packages  Unable to fetch file, server said 'Failed to open file. ' [IP: 130.89.148.12 21]

    I have no idea how to solve this. Here is what my current sources.list looks like:

        deb ftp://ftp.debian.org/debian lenny main contrib non-free
        deb-src ftp://ftp.debian.org/debian lenny main contrib non-free
        deb ftp://ftp.debian.org/debian lenny/updates main contrib non-free
        deb-src ftp://ftp.debian.org/debian lenny/updates main contrib non-free

    I'm running debian_version 5.0.8:

        # cat /etc/debian_version
        5.0.8

    Thanks!
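    A sketch of one likely fix, based on the fact that lenny (Debian 5.0) reached end of life and was moved off the main mirrors: point sources.list at the Debian archive host instead. The entries below are an assumption about the standard archive layout, not taken from the question.

        # /etc/apt/sources.list sketch for an archived release
        deb http://archive.debian.org/debian/ lenny main contrib non-free
        deb-src http://archive.debian.org/debian/ lenny main contrib non-free
        # archived security updates, if needed
        deb http://archive.debian.org/debian-security/ lenny/updates main contrib non-free

    After editing, apt-get update may still warn that the archived Release files are out of date, since they are no longer refreshed; on apt versions that support it, -o Acquire::Check-Valid-Until=false suppresses that check.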

    Read the article

  • How can I redirect URLs using the proxy module in Apache?

    - by LearningIT
    This seems like a super-basic question, but I am having a hard time tracking down a straightforward solution, so I appreciate any help and patience with me on this: I want to configure my Apache proxy server to redirect certain URLs so that, for example, a web browser HTTP request for www.olddomain.com gets passed to the proxy server, which then routes the request to www.newdomain.com, which sends a response to the proxy server, which then passes it back to the web browser. It seems so simple, yet I don't see how to achieve this with Apache. I know Squid/Squirm offer this functionality, so I am guessing I am missing something really basic. I know I can use RewriteRule to dynamically modify the URL and pass it to the proxy server, but I effectively want to do the reverse, whereby the proxy server receives the original URL, applies the RewriteRule, and then forwards the HTTP request to the new URL. Hope that makes sense. Thanks in advance for any help.
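    A minimal sketch of what this usually looks like with mod_proxy, offered as a guess at the setup being described; the domain names are the question's placeholders and the sketch assumes mod_proxy and mod_proxy_http are loaded.

        <VirtualHost *:80>
            ServerName www.olddomain.com
            # forward every request to the new host, and rewrite the
            # Location/redirect headers in the responses coming back
            ProxyPass        / http://www.newdomain.com/
            ProxyPassReverse / http://www.newdomain.com/
        </VirtualHost>

    The mod_rewrite equivalent is a rule with the [P] (proxy) flag, e.g. RewriteRule ^(.*)$ http://www.newdomain.com$1 [P].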

    Read the article

  • Is there a way to redirect certain URLs to specific web browsers in Linux?

    - by jraxxo
    I'm using Chrome as my default browser in Ubuntu 12.10. I need to use Firefox for business purposes (certain websites pertaining to my work only work with Firefox). Is there a way to force Ubuntu to use Firefox for certain types of URLs (maybe as defined by a regular expression) while maintaining Chrome as my default browser for all my other tasks? Perhaps as a shell script running in the background? I'd like this to work system-wide, covering links from Chrome itself as well as PDFs/ODTs, etc. I have searched for solutions, but I couldn't find anything besides OpenWith, a Firefox extension that adds a button to open certain links in other browsers; that would again require me to open Firefox beforehand, which does not help me at all. Does anyone have any ideas? Something like Choosy for Linux?
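    A sketch of the shell-script approach hinted at in the question: register a small dispatcher as the system default browser (e.g. with xdg-settings and a matching .desktop file) and branch on the URL. The script and its URL patterns are hypothetical placeholders, not a tested solution.

        #!/bin/sh
        # url-dispatch.sh - hypothetical default "browser" that routes URLs
        case "$1" in
          *work-site.example.com*|*intranet.example*) exec firefox "$1" ;;
          *)                                          exec google-chrome "$1" ;;
        esac

    Anything that opens a link through the system handler (PDF viewers, office apps) would then inherit the routing; links clicked inside Chrome itself would still need handling on the Chrome side, e.g. via an extension.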

    Read the article

  • Elements of a website don't work in IE

    - by mjcuva
    On the site I'm working on for my high school basketball team, certain elements don't work in Internet Explorer. The site is hermantownbasketball.com. The boys' basketball sidebar should have nested drop-down menus: one when you mouse over the team, such as "High School", and then another when you mouse over the grade under the team, such as 9th grade. This works perfectly fine in Chrome; however, I can't get it to work in any version of Internet Explorer. Below are the relevant part of the HTML and the corresponding CSS I am using. Unfortunately, I don't know enough CSS to know which part of my code IE doesn't like or how to fix it. Any help is greatly appreciated!

    HTML

        <span class="boyItem">
          <h3>High School</h3>
          <li class="group">
            <h4>9th Grade</h4>
            <div class="nested">Schedule</div>
            <div class="nested">Events</div>
            <div class="nested">Forms</div>
            <div class="nested">Calendar</div>
          </li>
          <li class="group">
            <h4>JV/Varsity</h4>
            <div class="nested">Schedule</div>
            <div class="nested">Events</div>
            <div class="nested">Forms</div>
            <div class="nested">Calendar</div>
          </li>
        </span>

    CSS

        /* Creates the box around the title for each boy section. */
        .boyItem h3 { background:#1C23E8; color:#EFFA20; padding-right:2px;
          padding:10px; font-size:18px; margin-left:-30px; margin-top:-10px; }
        .boyItem h3:hover { background:#2A8FF5; }

        /* Prevents the boy sub-sections from being visible */
        .boyItem li h4 { position:absolute; left:-9999px; font-size:15px;
          list-style-type:none; }

        /* Shows the boy sub-sections when the user mouses over the section title. */
        .boyItem:hover li h4 { position:relative; left:10px; background:#1C23E8;
          color:#EFFA20; padding-left:20px; padding:5px; }
        .boyItem:hover li h4:hover { background:#2A8FF5; }

        .nested { position:absolute; left:-9999px; background:#352EFF;
          color:#EFFA20; padding-right:2px; padding:4px; font-size:14px;
          margin:2px; margin-left:30px; margin-top:0px; margin-right:0px;
          margin-bottom:-2px; }
        .group:hover .nested { position:relative; left:0px; }
        .group:hover .nested:hover { background:#2A8FF5; }
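    One guess at the IE-specific breakage, sketched below rather than confirmed against the site: the li elements have no ul/ol parent, which IE treats far less forgivingly than Chrome, and older IE only honors :hover on non-anchor elements in standards mode (IE6 not at all), so a proper list wrapper plus a strict doctype is the first thing to try.

        <!-- hypothetical restructuring: give the list items a real list parent -->
        <ul class="boyItem">
          <li class="group">
            <h4>9th Grade</h4>
            <div class="nested">Schedule</div>
            ...
          </li>
        </ul>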

    Read the article

  • De-index URL parameters by value

    - by Doug Firr
    Up front: this question is lengthy, so allow me to provide a one-sentence summary: I need to get Google to de-index URLs that have parameters with certain values appended. I have a website example.com with language translations. There used to be many translations, but I deleted them all so that only the English (default) and French options remain. When one selects a language option, a parameter is added to the URL. For example, the home page:

        https://example.com (default)
        https://example.com/main?l=fr_FR (French)

    I added a robots.txt to stop Google from crawling any of the language translations:

        # robots.txt generated at http://www.mcanerin.com
        User-agent: *
        Disallow:
        Disallow: /cgi-bin/
        Disallow: /*?l=

    So any pages containing "?l=" should not be crawled. I checked in GWT using the robots testing tool. It works. But under HTML improvements, the previously crawled language translation URLs remain indexed. The internet says to add a 404 to the header of the removed URLs so that Google knows to de-index them. I checked to see what my CMS would throw up if I visited one of the URLs that should no longer exist. This URL was listed in GWT under duplicate title tags (one of the reasons I want to scrub up my URLs):

        https://example.com/reports/view/884?l=vi_VN&l=hy_AM

    This URL should not exist - I removed the language translations. The page loads when it should not! I played around. I typed example.com?whatever123. It seems that parameters always load as long as everything before the question mark is a real URL. So if Google has indexed all these URLs with parameters, how do I remove them? I cannot check whether a 404 is being generated, because the page always loads - and it's that parameter that needs to be de-indexed.
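    A sketch of the standard gotcha here, stated as general knowledge rather than a diagnosis of this site: a robots.txt Disallow only blocks crawling, and Google keeps already-indexed URLs it can no longer fetch. De-indexing needs the pages to stay crawlable while carrying a noindex signal (or returning 404/410), so the Disallow line may have to be lifted temporarily.

        <!-- hypothetical tag emitted on every ?l= translation page -->
        <meta name="robots" content="noindex">

    Once the URLs drop out of the index, the robots.txt rule can go back in to prevent re-crawling.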

    Read the article

  • Difference between two kinds of Bing URL Referers

    - by joshuahedlund
    Most of the referral URLs that I get from Bing have the following syntax: http://www.bing.com/search?q=keywords+keywords&[some other variables]. However, I just noticed that maybe 10-20% of them are coming in like this: http://www.bing.com/url?source=search&[some other variables]&url=http%3A%2F%2Fwww.example.com/user-landing-page-on-my-site&yrktarget=_top&q=keywords+keywords&[some other variables]. The first syntax gives me the keywords the user typed in, but the second actually gives me both the keywords the user typed in and their landing page on my site. I was originally unaware of this second kind altogether, because I have a customized referral report that filters out URLs containing my domain. But now that I have noticed them, I want to know why they occur, to see if I can get more to occur this way, because the second syntax contains more valuable information. If I go to one of the first URLs, it gives me a typical Bing query page. The second URLs seem to just redirect me to the Bing home page. I'm not sure if it has to do with the kind of search being performed (I also get a few http://www.bing.com/shopping/search?q= referers) or some other metric. Does anyone know what causes some referral URLs from Bing to have the /search?q syntax and others to have the /url?source syntax? P.S. I have verified that I am getting both kinds of URLs from non-advertising clicks. P.P.S. I am not talking about data in Google Analytics or similar software, but the raw $_SERVER['HTTP_REFERER'] value coming from the client's original request.

    Read the article

  • De-index URL parameters

    - by Doug Firr
    Up front: this question is lengthy, so allow me to provide a one-sentence summary: I need to get Google to de-index URLs that have certain parameters appended. I have a website example.com with language translations. There used to be many translations, but I deleted them all so that only the English (default) and French options remain. When one selects a language option, a parameter is added to the URL. For example, the home page:

        https://example.com (default)
        https://example.com/main?l=fr_FR (French)

    I added a robots.txt to stop Google from crawling any of the language translations:

        # robots.txt generated at http://www.mcanerin.com
        User-agent: *
        Disallow:
        Disallow: /cgi-bin/
        Disallow: /*?l=

    So any pages containing "?l=" should not be crawled. I checked in GWT using the robots testing tool. It works. But under HTML improvements, the previously crawled language translation URLs remain indexed. The internet says to add a 404 to the header of the removed URLs so that Google knows to de-index them. I checked to see what my CMS would throw up if I visited one of the URLs that should no longer exist. This URL was listed in GWT under duplicate title tags (one of the reasons I want to scrub up my URLs):

        https://example.com/reports/view/884?l=vi_VN&l=hy_AM

    This URL should not exist - I removed the language translations. The page loads when it should not! I played around. I typed example.com?whatever123. It seems that parameters always load as long as everything before the question mark is a real URL. So if Google has indexed all these URLs with parameters, how do I remove them? I cannot check whether a 404 is being generated, because the page always loads - and it's that parameter that needs to be de-indexed.

    Read the article

  • Ubuntu 12.04 slow boot on ASUS, with dmesg and bootchart attached

    - by stanleyhunk
    I have heard that Ubuntu can boot in around 30sec, but mine takes more than 60sec every time Ubuntu boots. I have also read in some forums that I need to post the dmesg and bootchart output to identify which process is slowing down the boot time. As I'm not an expert in Ubuntu and wish to learn more about it, I humbly ask any pro here to teach me how. My laptop specs:

        Model : ASUS K45VS
        RAM : 8GB
        CPU : Intel(R) Core(TM) i7-3630QM CPU @ 2.40GHz x 8
        Graphic Card : nVidia GeForce GT 645M
        HDD : 750GB
        OS : Single boot Ubuntu 12.04LTS
        System.uname : Linux 3.8.0-39-generic #58~precise1-Ubuntu SMP Fri May 2 21:33:40 UTC 2014 x86_64
        System.release : Ubuntu 12.04.4 LTS
        System.kernel.options : BOOT_IMAGE=/boot/vmlinuz-3.8.0-39-generic root=UUID=c8a71503-bce8-406c-9a5f-5aa8284f5c7c ro quiet splash

    My dmesg (with the large time-frame gap highlighted):

        [   30.772656] cgroup: libvirtd (1961) created nested cgroup for controller "memory" which has incomplete hierarchy support. Nested cgroups may change behavior in the future.
        [   30.772659] cgroup: "memory" requires setting use_hierarchy to 1 on the root.
        [   30.772683] cgroup: libvirtd (1961) created nested cgroup for controller "devices" which has incomplete hierarchy support. Nested cgroups may change behavior in the future.
        [   30.772710] cgroup: libvirtd (1961) created nested cgroup for controller "blkio" which has incomplete hierarchy support. Nested cgroups may change behavior in the future.
        [   32.140335] nvidia 0000:01:00.0: irq 46 for MSI/MSI-X
        [   32.505619] ACPI Error: Field [TMPB] at 1081344 exceeds Buffer [ROM1] size 262144 (bits) (20121018/dsopcode-236)
        [   32.505624] ACPI Error: Method parse/execution failed [\_SB_.PCI0.PEG0.PEGP._ROM] (Node ffff880224e91f00), AE_AML_BUFFER_LIMIT (20121018/psparse-537)
        [  802.034422] audit_printk_skb: 69 callbacks suppressed
        [  802.034428] type=1400 audit(1400914804.392:35): apparmor="DENIED" operation="capable" parent=1 profile="/usr/sbin/cupsd" pid=1683 comm="cupsd" pid=1683 comm="cupsd" capability=36 capname="block_suspend"
        [ 1581.300901] type=1400 audit(1400915584.816:36): apparmor="DENIED" operation="capable" parent=1 profile="/usr/sbin/cupsd" pid=1683 comm="cupsd" pid=1683 comm="cupsd" capability=36 capname="block_suspend"

    My Bootchart.png:

    Looking forward to learning how to improve both my boot time and my knowledge. Thanks in advance :)

    Read the article

  • How to remove repeated records from results (LINQ to SQL)

    - by Sadegh
    Hi, I want to remove repeated records from my results, but Distinct doesn't do this for me! Why?

        var results = (from words in _Xplorium.Words
                       join wordFiles in _Xplorium.WordFiles on words.WordId equals wordFiles.WordId
                       join files in _Xplorium.Files on wordFiles.FileId equals files.FileId
                       join urls in _Xplorium.Urls on files.UrlId equals urls.UrlId
                       where files.Title.Contains(query) || files.Description.Contains(query)
                       orderby wordFiles.Count descending
                       select new SearchResultItem()
                       {
                           Title = files.Title,
                           Url = urls.Address,
                           Count = wordFiles.Count,
                           CrawledOn = files.CrawledOn,
                           Description = files.Description,
                           Lenght = files.Lenght,
                           UniqueKey = words.WordId + "-" + files.FileId + "-" + urls.UrlId
                       }).Distinct();
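    A sketch of the usual cause, stated as general LINQ knowledge rather than a diagnosis of this schema: Distinct() compares custom classes like SearchResultItem by reference unless Equals/GetHashCode are overridden (or an IEqualityComparer<T> is supplied). One workaround that reuses the question's own UniqueKey field:

        // hypothetical follow-up query over the results above
        var distinctResults = results
            .AsEnumerable()                  // run the SQL query first
            .GroupBy(r => r.UniqueKey)       // bucket duplicates by key
            .Select(g => g.First());         // keep one record per key

    Alternatively, selecting an anonymous type instead of SearchResultItem makes Distinct() behave, because anonymous types get value-based equality for free.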

    Read the article

  • How should I loop a Nokogiri search in Ruby?

    - by kim
    I have the following code, which retrieves the title of each URL from an array that contains a list of URLs:

        require 'rubygems'
        require 'nokogiri'
        require 'open-uri'

        @urls = ["http://google.com", "http://yahoo.com", "http://rubyonrails.org"]
        @found_titles = Array.new
        @found_titles[0] = Nokogiri::HTML(open("#{@urls[0]}")).search("title").inner_html
        # this can go on forever...
        #@found_titles[1] = Nokogiri::HTML(open("#{@urls[1]}")).search("title").inner_html
        #@found_titles[2] = Nokogiri::HTML(open("#{@urls[2]}")).search("title").inner_html
        puts "#{@found_titles[0]}"

    How should I form a loop for this, so I can get the titles even when the list in the @urls array gets longer?
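    A minimal sketch of the idiomatic answer, assuming the same requires as above: map over the array instead of indexing each element by hand.

        # build the titles list from however many URLs @urls holds
        @found_titles = @urls.map do |url|
          Nokogiri::HTML(open(url)).search("title").inner_html
        end

        @found_titles.each { |title| puts title }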

    Read the article

  • IIS7.5 Outbound Rule for lowercase URLs in <a href="...">

    - by Quog
    Hi, I know how to canonicalise the case of URLs on incoming requests in IIS7.5; in fact, there's a built-in rule template to start from. But how about outbound (without changing the code)? This is how far I have got:

        <outboundRules>
          <rule name="Outbound lowercase" preCondition="IsHTML" enabled="true">
            <match filterByTags="A" pattern="[A-Z]" ignoreCase="false" />
            <action type="Rewrite" value="{ToLower:{R:0}}" />
          </rule>
          <preConditions>
            <preCondition name="IsHTML">
              <add input="{RESPONSE_CONTENT_TYPE}" pattern="^text/html" />
            </preCondition>
          </preConditions>
        </outboundRules>

    However, IIS barfs on the action with a 500 implying an invalid web.config, probably on the {ToLower:XXXX}, which I stole from the MS-supplied inbound rule template. Does anyone know how to do this? Does anyone know where the options are fully documented? (My GoogleNinja skills failed me: I found this, but "Specifies value syntax for the rule. This element is available only for the Rewrite action type" is not really comprehensive.) Thanks, Damian

    Read the article

  • How to route tree-structured URLs with ASP.NET Routing?

    - by Venemo
    Hello everyone, I would like to achieve something very similar to this question, with some enhancements. There is an ASP.NET MVC web application. I have a tree of entities: for example, a Page class which has a property called Children, which is of type IList<Page>. (An instance of the Page class corresponds to a row in a database.) I would like to assign a unique URL to every Page in the database. I handle Page objects with a controller called PageController. Example URLs:

        http://mysite.com/Page1/
        http://mysite.com/Page1/SubPage/
        http://mysite.com/Page/ChildPage/GrandChildPage/

    You get the picture. So, I'd like every single Page object to have its own URL that is equal to its parent's URL plus its own name. In addition to that, I would also like the ability to map a single Page to the / (root) URL. I would like to apply these rules:

        1. If a URL can be handled by any other route, or a file exists in the filesystem at the specified URL, let the default URL mapping happen.
        2. If a URL can be handled by the virtual path provider, let that handle it.
        3. If there is no other handler, map the remaining URLs to the PageController class.

    I also found this question, and also this one and this one, but they weren't of much help, since they don't provide an explanation about my first two points. I see the following possible solutions:

        1. Map a route for each page individually. This requires me to go over the entire tree when the application starts, adding an exact-match route to the end of the route table.
        2. Add a route with {*path} and write a custom IRouteHandler that handles it - but then I can't see how I could deal with the first two rules, since this handler would get to handle everything.

    So far, the first solution seems to be the right one, because it is also the simplest. I would really appreciate your thoughts on this. Thank you in advance!
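    For what it's worth, a sketch of the catch-all variant (option 2), assuming standard ASP.NET MVC routing; the names are illustrative. Registered last, it only runs when no earlier route matches, and since RouteCollection.RouteExistingFiles defaults to false, requests matching real files bypass routing entirely, which covers rule 1.

        // in RegisterRoutes, after all specific routes:
        routes.MapRoute(
            "PageTree",                                  // hypothetical route name
            "{*path}",                                   // catch-all: "a/b/c" arrives as one string
            new { controller = "Page", action = "Show" } // PageController.Show(string path)
        );

    The Show action would then split path on '/' and walk the Children tree to find the matching Page, returning HttpNotFound() for anything unresolvable.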

    Read the article

  • How do I use a jQuery not selector to select relative URLs?

    - by Matt
    I'm working on a little jQuery script to add Google Analytics pageTracker onclick data to URLs on my forum, allowing me to track clicks to external sites. I don't want to add the onclick to internal links on forum.sitename or sitename, and I don't want to add it to any hrefs marked # or that start with /. My script below works nicely, but for one minor problem! All of the forum's URLs are relative and don't start with /. I appear to have no way to change that, so I need to modify the jQuery below to prevent it adding the onclick to those links, as it currently does. What I want to do is to write a .not() function like .not("[href!^=http") to prevent jQuery from adding the onclick to any hrefs which do not start with http. However, .not() appears not to support this. I'm new to jQuery and can't figure this out. Any pointers would be massively appreciated.

        $(document).ready(function(){
          // Get URL from a href
          var URL = $("a").attr('href');
          // Add pageTracker data for GA tracking
          $("a")
            .not("[href^=#]")
            .not("[href^=http://forum.sitename]")
            .not("[href^=http://www.sitename]")
            .attr("onclick","pageTracker._trackEvent('Outgoing_Links', 'Forum', " + URL + ");")
          ;
        });

    Thanks!
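    A sketch of one way around the missing "does not start with" selector, keeping the question's selectors and pageTracker call: select only the absolute links in the first place, then exclude the internal hosts. This also sidesteps a bug in the original, where var URL is read once from the first anchor and reused for every link.

        $(document).ready(function () {
          $('a[href^="http"]')                        // absolute links only
            .not('[href^="http://forum.sitename"]')   // skip internal forum links
            .not('[href^="http://www.sitename"]')     // skip internal site links
            .each(function () {
              var url = $(this).attr('href');         // per-link URL, not a shared one
              $(this).attr('onclick',
                "pageTracker._trackEvent('Outgoing_Links', 'Forum', '" + url + "');");
            });
        });

    Relative hrefs and #-anchors never match [href^="http"], so the extra .not() rules for them fall away.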

    Read the article

  • How to re-assemble machine-generated classes from XSD files to their original nested state

    - by Paul Connolly
    Hi everyone, I'm working in Visual Studio 2008 using C#. Let's say I have 2 XSD files, e.g. "Envelope.xsd" and "Body.xsd". I create 2 sets of classes by running xsd.exe, creating something like "Envelope.cs" and "Body.cs"; so far so good. I can't figure out how to link the two classes to serialize (using XmlSerializer) into the proper nested XML, i.e. I want:

        <Envelope><DocumentTitle>Title</DocumentTitle><Body>Body Info</Body></Envelope>

    But I get:

        <Envelope><DocumentTitle>Title</DocumentTitle></Envelope><Body>Body Info</Body>

    Could someone perhaps show me how the two .cs classes should look to enable XmlSerializer to produce the desired nested result? Thanks a million, Paul
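    A sketch of the usual shape of the fix, with hypothetical member names since the generated classes aren't shown: XmlSerializer nests elements by following the object graph, so the Envelope class needs a property of the Body type, and the whole thing must be serialized as one root object rather than as two separate documents.

        // hand-written sketch; in practice this could live in a partial class
        // alongside the xsd.exe output
        public class Envelope
        {
            public string DocumentTitle { get; set; }
            public Body Body { get; set; }   // the nesting comes from this property
        }

        // new XmlSerializer(typeof(Envelope)).Serialize(writer, envelope);

    Another route is to nest the element declarations in the XSD itself (importing Body.xsd from Envelope.xsd), so that xsd.exe generates the linked classes directly.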

    Read the article

  • How do I serve nested static content on Heroku?

    - by Matthew Murdoch
    I have a rails application with static content in the public directory (e.g. public/index.html) and additional static content in nested subdirectories (e.g. public/one/two/index.html). All the static content is served correctly if I run it locally via script/server but when I upload it to Heroku the top-level page loads correctly but the nested content returns a 404. I've found a number of resources (for example this question) which discuss static content in rails but they all seem to assume a fairly simple structure with a single directory containing all the files. Is there any way I can fix this?

    Read the article

  • How can I save BioPerl sequence nested features in GenBank or EMBL format?

    - by Ryan Thompson
    In BioPerl, a sequence object can have any number of features, and each of these can have subfeatures nested within them. For example, a feature may be the complete coding sequence of a gene, and its subfeatures might be the individual exons that are concatenated to form the full coding sequence. However, when I use BioPerl to write a sequence object to a file in GenBank or EMBL format, only the top-level features are written to the file, not the subfeatures nested within them. How can I store my subfeatures in sequence files? Should I just convert all my subfeatures into top-level features, and then reconstruct the tree structure the next time I read in the sequence?
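    A sketch of the flattening approach mentioned at the end, using BioPerl's generic feature accessors; reconstructing the tree later depends on tagging each subfeature with its parent, which is an assumption here rather than a built-in convention.

        # promote nested subfeatures to top-level features before writing
        for my $feat ($seq->get_SeqFeatures) {
            for my $sub ($feat->get_SeqFeatures) {
                $sub->add_tag_value('parent_id', $feat->primary_tag);  # hypothetical tag
                $seq->add_SeqFeature($sub);
            }
        }
        Bio::SeqIO->new(-format => 'genbank', -fh => \*STDOUT)->write_seq($seq);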

    Read the article

  • In a Maven project, what are reasons for either a nested or a flat directory layout?

    - by Hanno Fietz
    As my Maven project grows, I'm trying to stay on top of the project structure. So far, I have a nested directory layout with 2-3 levels, where there's a POM on each level with module entries corresponding to the directories at that level. POM inheritance (parent property) does not necessarily follow this, and is not relevant for the purpose of this question. Now, while the nested structure seems pretty natural to Maven, and it's nice and clean as long as you are on one particular level, I'm starting to get confused by what I look at in my IDE (Eclipse and IntelliJ IDEA). I had a look at the Apache Felix sources, and they have a pretty complex project in what seems to be a flat directory structure, so I'm wondering if this would be a better way to go. What are some pros and cons for either approach that you have experienced in practice? Note that this question (which I found meanwhile) seems to be very similar. I'll leave it to the community to decide whether this should be closed as a duplicate.

    Read the article

  • VS2005 SQL Syntax highlighting is incorrect for nested comments?

    - by MattH
    I'm working with VS2005 and SSMS 2005. SQL Server allows nested comments, as follows:

        /* Comment 1
           /* Comment 2 */
           Some commented out code here
        */

    This code runs fine. However, when putting the above into a .sql file in VS2005, it incorrectly shows the commented-out code as 'active' (it's not green). It seems that Stack Overflow has highlighted the code in the same way. Is this a bug in VS2005? Or does SSMS handle nested comments differently compared to the ANSI SQL standard? Can someone clarify this discrepancy and, if it appears to be a bug, whether there is a way to fix the syntax highlighting?

    Read the article

  • Is it possible to create nested classes in PHP as it is in C#?

    - by Edward Tanguay
    In C# you can have nested classes like this, which are useful if you have classes which do not have meaning outside the scope of one particular class, e.g. in a factory pattern:

        public abstract class BankAccount
        {
            private BankAccount() {}

            private sealed class SavingsAccount : BankAccount { ... }
            private sealed class CheckingAccount : BankAccount { ... }

            public BankAccount MakeSavingAccount() { ... }
            public BankAccount MakeCheckingAccount() { ... }
        }

    Is this possible in PHP? I've read that it was planned for PHP 5, then cancelled, then planned again, but I can't find definitive info. Does anyone know how to create nested classes (classes within the scope of another class), as in the above C# example, using PHP 5.3?
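    For what it's worth, a sketch of the common PHP 5.3 approximation, since nested classes never landed in PHP: a namespace provides the grouping, and a protected constructor keeps construction inside the hierarchy. True private nesting, as in the C# example, isn't expressible; the names below simply mirror the question's code.

        <?php
        namespace Bank;

        abstract class BankAccount
        {
            protected function __construct() {}

            public static function makeSavingsAccount()  { return new SavingsAccount(); }
            public static function makeCheckingAccount() { return new CheckingAccount(); }
        }

        class SavingsAccount extends BankAccount {}
        class CheckingAccount extends BankAccount {}

    Note the classes here are siblings in a namespace, not true inner classes scoped to BankAccount.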

    Read the article

< Previous Page | 53 54 55 56 57 58 59 60 61 62 63 64  | Next Page >