Search Results

Search found 20409 results on 817 pages for 'url routing'.

  • RRAS Problem routing to central site from RRAS server only?

    - by TomTom
    Given is an office connected to headquarters using an RRAS bridge (two virtual machines using RRAS to route between the two networks). Naming: the office is A, and the RRAS machine on A is a-lnk. The headquarters is B, and b-lnk is the RRAS machine there. The VPN works perfectly - machines can ping and work between the sites, domain controllers on both ends are replicating, DFS is working, remote desktop is working. All in all, everything is fine. EXCEPT: a-lnk itself can not reach any machine in B. This would normally not be troublesome (no one ever does anything on a-lnk), but there are two exceptions: a-lnk is supposed to get its license from a KMS in B, so not being able to reach B means it is not renewing its activation; and a-lnk is supposed to pull updates from a WSUS in B, so not being able to reach B means no updates. Given that things work (and security is a minor issue - a-lnk is not reachable from the internet as it is behind a NAT device anyway), this went unhandled for months. I just want to get this item ticked off now. Does anyone have an idea what this is? It definitely is not a "DNS does not work" or "routing in general is bad" issue, as any computer in A can connect to any computer in B and the other way around - only the RRAS computer itself seems to do something really awkward. Platform for both: 2008 R2 Standard.

    Read the article

  • A particular URL on a website suddenly disappeared from Google search results - why?

    - by Ragavendran Ramesh
    I have a website with a particular page URL that was indexed in Google search results - in the first 10 results. Suddenly it disappeared, and now that page is not even in the first 100 results. What could be the reason? I suspect the page has been spammed by our competitors. Is it possible to avoid that, or can I find out whether that page has been spammed or not? Is it possible to tell whether a particular page on a website is spam or malicious?

    Read the article

  • How to get rid of crawling errors due to the URL Encoded Slashes (%2F) problem in Apache

    - by user14198
    The Google web crawler has indexed a whole set of URLs with encoded slashes (%2F) for our site. I assume it has picked up the pages from our XML sitemap file. The problem is that the live pages actually fail because of the URL-encoded-slashes problem in Apache. Some solutions are mentioned here. We are implementing a 301-redirect scheme for all the error pages; this should make the Googlebot drop the pages from the crawl errors (no more failing pages). Does implementing the 301s require the pages to be "live"? In that case we may be forced to implement solution 1 in the article. The problem is that solution 1 poses a security vulnerability.
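    For reference, a minimal sketch of the two Apache pieces usually combined here; the directives are standard Apache ones, but the paths are placeholders rather than the poster's real URLs:

    # Accept %2F in request URLs instead of rejecting them outright.
    AllowEncodedSlashes On

    # Permanently redirect a formerly indexed URL (placeholder) to its live
    # equivalent so the crawler drops it from the crawl-error list.
    Redirect 301 /old/encoded/page /new/page

    (The old URL only has to answer with the 301 status; it does not need to serve content itself.)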

    Read the article

  • Internet completely cut off in Libya, including inside the country - will bit.ly URLs suffer?

    Internet completely cut off in Libya, including inside the country - will bit.ly URLs suffer? Update of 04.03.2011 by Katleen. While the digital situation in Libya had improved about ten days ago, the escalation of violence currently taking place there seems to have called everything back into question. Indeed, according to several specialists, there is no longer any Internet traffic leaving or entering the country. [IMG]http://www.renesys.com/blog/assets_c/2011/03/latencies_Libya2_AllSources_c-thumb-400x342-275.png[/IMG] Various companies in the IT security and network monitoring sectors have observed that the...

    Read the article

  • We've had our content copied under a different URL - why and what do we do?

    - by Shaun
    We have a problem. We've noticed a large amount of traffic showing up in our Google Analytics. Upon further investigation we have found that our content has been copied under a different URL. Our site: http://www.targetis.co.uk The copied site: http://www.target-is.com (it isn't showing up in Chrome for us). We don't own this domain. Their content is hosted with them (not via a proxy). The largest part of the traffic is coming from a video hosting site. What do we do?

    Read the article

  • Multiple languages, same pages - shall I change the page URL path as well?

    - by Athanatos
    We own multiple country-code top-level domains for our website, e.g. DE, UK, FR. When someone visits one of those domains they are redirected to .com, and on the first visit the language automatically changes to the one of the originating domain. Users can also change the language on the .com website using a dropdown; however, the page URI stays exactly the same, e.g. service.php. How will that be indexed by Google? Will all the different languages be indexed, or only the default language (English)? Is it recommended for SEO purposes to do something with the page URL (even using .htaccess, maybe) so that I can also append the language to the title or page name, e.g. service.php?lang=fr?
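    One widely used pattern, sketched below under the assumption that each language really does get its own URL such as service.php?lang=fr (the domain is a placeholder), is to declare the language alternates in the page head so each version can be indexed separately:

    <link rel="alternate" hreflang="en" href="http://www.example.com/service.php" />
    <link rel="alternate" hreflang="fr" href="http://www.example.com/service.php?lang=fr" />
    <link rel="alternate" hreflang="de" href="http://www.example.com/service.php?lang=de" />

    If every language keeps the identical URI, search engines generally see only one document, so splitting the URLs (by parameter, path or subdomain) is the usual prerequisite.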

    Read the article

  • Users using Perl script to bypass Squid Proxy

    - by mk22
    The users on our network have been using a Perl script to bypass our Squid proxy restrictions. Is there any way we can block this script from working?

    #!/usr/bin/perl
    ########################################################################
    # (c) 2008 Indika Bandara Udagedara
    # [email protected]
    # http://indikabandara19.blogspot.com
    #
    # ----------
    # LICENCE
    # ----------
    # This work is protected under GNU GPL
    # It simply says
    # " you are hereby granted to do whatever you want with this
    #   except claiming you wrote this."
    #
    #
    # ----------
    # README
    # ----------
    # A simple tool to download via http proxies which enforce a download
    # size limit. Requires curl.
    # This is NOT a hack. This uses the absolutely legal HTTP/1.1 spec
    # Tested only for squid-2.6. Only squids will work with this(i think)
    # Please read the verbose README provided kindly by Rahadian Pratama
    # if u r on cygwin and think this documentation is not enough :)
    #
    # The newest version of pget is available at
    # http://indikabandara.no-ip.com/~indika/pget
    #
    # ----------
    # USAGE
    # ----------
    # + Edit below configurations(mainly proxy)
    # + First run with -i <file> giving a sample file of same type that
    #   you are going to download. Doing this once is enough.
    #   eg. to download '.tar' files first run with
    #       pget -i my.tar   ('my.tar' should be a real file)
    # + Run with
    #       pget -g <URL>
    #
    #
    ########################################################################

    ########################################################################
    # CONFIGURATIONS - CHANGE THESE FREELY
    ########################################################################
    # *magic* file
    # pls set absolute path if in cygwin
    my $_extFile = "./pget.ext";

    # download in chunks of below size
    my $_chunkSize = 1024*1024;    # in Bytes

    # the proxy that troubles you
    my $_proxy      = "192.168.0.2:3128";    # proxy URL:port
    my $_proxy_auth = "user:pass";           # proxy user:pass

    # whereis curl
    # pls set absolute path if in cygwin
    my $_curl = "/usr/bin/curl";

    ########################################################################
    # EDIT BELOW ONLY IF YOU KNOW WHAT YOU ARE DOING
    ########################################################################
    use warnings;

    my $_version = "0.1.0";

    PrintBanner();

    if (@ARGV == 0) {
        PrintHelp();
        exit;
    }

    PrimaryValidations();

    my $val;
    while (scalar(@ARGV)) {
        my $arg = shift(@ARGV);
        if ($arg eq '-h') {
            PrintHelp();
        } elsif ($arg eq '-i') {
            $val = shift(@ARGV);
            if (!defined($val)) {
                printf("-i option requires a filename\n");
                exit;
            }
            Init($val);
        } elsif ($arg eq '-g') {
            $val = shift(@ARGV);
            if (!defined($val)) {
                printf("-g option requires a URL\n");
                exit;
            }
            GetURL($val);
        } elsif ($arg eq '-c') {
            $val = shift(@ARGV);
            if (!defined($val)) {
                printf("-c option requires a URL\n");
                exit;
            }
            ContinueURL($val);
        } else {
            printf("Unknown option %s\n", $arg);
            PrintHelp();
        }
    }

    sub GetURL {
        my ($URL) = @_;
        chomp($URL);
        my $fileName = GetFileName($URL);
        my %mapExt;
        my $first;
        my $readLen;
        my $ext = GetExt($fileName);
        ReadMap($_extFile, \%mapExt);
        if (exists($mapExt{$ext})) {
            $first = $mapExt{$ext};
            GetFile($URL, $first, $fileName, 0);
        } else {
            die "Unknown ext in $fileName. Rerun with -i <fileName>";
        }
    }

    sub ContinueURL {
        my ($URL) = @_;
        chomp($URL);
        my $fileName = GetFileName($URL);
        my $fileSize = 0;
        $fileSize = -s $fileName;
        printf("Size = %d\n", $fileSize);
        my $first = -1;
        if ($fileSize > 0) {
            $fileSize -= 1;
            GetFile($URL, $first, $fileName, $fileSize);
        } else {
            GetURL($URL);
        }
    }

    sub Init {
        my ($fileName) = @_;
        my ($key, $value);
        my %mapExt;
        my $ext = GetExt($fileName);
        if ($ext eq "") {
            die "Cannot get ext of \'$fileName\'";
        }
        ReadMap($_extFile, \%mapExt);
        my $b = GetFirst($fileName);
        $mapExt{$ext} = $b;
        WriteMap($_extFile, \%mapExt);
        print "I handle\n";
        while (($key, $value) = each(%mapExt)) {
            print "\t$key -> $value\n";
        }
    }

    sub GetExt {
        my ($name) = @_;
        my @x = split(/\./, $name);
        my $ext = "";
        if (@x != 1) {
            $ext = pop @x;
        }
        return $ext;
    }

    sub ReadMap {
        my ($fileName, $mapRef) = @_;
        my $f;
        my @arr;
        open($f, '<', $fileName) or die "Couldn't open $fileName";
        my %map = %{$mapRef};
        while (<$f>) {
            my $line = $_;
            chomp($line);
            @arr = split(/[ \t]+/, $line, 2);
            $mapRef->{ $arr[0]} = $arr[1];
        }
        printf("known ext\n");
        while (($key, $value) = each(%$mapRef)) {
            print("$key, $value\n");
        }
        close($f);
    }

    sub WriteMap {
        my ($fileName, $mapRef) = @_;
        my $f;
        my @arr;
        open($f, '>', $fileName) or die "Couldn't open $fileName";
        my ($k, $v);
        while (($k, $v) = each(%{$mapRef})) {
            print $f "$k" . "\t$v\n";
        }
        close($f);
    }

    sub PrintHelp {
        print "usage:
        -h              Print this help
        -i <filename>   Initialize for this filetype
        -g <URL>        Get this URL\n
        -c <URL>        Continue this URL\n"
    }

    sub GetFirst {
        my ($fileName) = @_;
        my $f;
        open($f, "<$fileName") or die "Couldn't open $fileName";
        my $buffer = "";
        my $first = -1;
        binmode($f);
        sysread($f, $buffer, 1, 0);
        close($f);
        $first = ord($buffer);
        return $first;
    }

    sub GetFirstFromMap {
    }

    sub GetFileName {
        my ($URL) = @_;
        my @x = split(/\//, $URL);
        my $fileName = pop @x;
        return $fileName;
    }

    sub GetChunk {
        my ($URL, $file, $offset, $readLen) = @_;
        my $end = $offset + $_chunkSize - 1;
        my $curlCmd = "$_curl -x $_proxy -u $_proxy_auth -r $offset-$end -# \"$URL\"";
        print "$curlCmd\n";
        my $buff = `$curlCmd`;
        ${$readLen} = syswrite($file, $buff, length($buff));
    }

    sub GetFile {
        my ($URL, $first, $outFile, $fileSize) = @_;
        my $readLen = 0;
        my $start = $fileSize + 1;
        my $file;
        open($file, "+>>$outFile") or die "Couldn't open $outFile to write";
        if ($fileSize <= 0) {
            my $uc = pack("C", $first);
            syswrite($file, $uc, 1);
        }
        do {
            GetChunk($URL, $file, $start, \$readLen);
            $start = $start + $_chunkSize;
            $fileSize += $readLen;
        } while ($readLen == $_chunkSize);
        printf("Downloaded %s(%d bytes).\n", $outFile, $fileSize);
        close($file);
    }

    sub PrintBanner {
        printf("pget version %s\n", $_version);
        printf("There is absolutely NO WARRANTY for pget.\n");
        printf("Use at your own risk. You have been warned.\n\n");
    }

    sub PrimaryValidations {
        unless (-e "$_curl") {
            printf("ERROR:curl is not at %s. Pls install or provide correct path.\n", $_curl);
            exit;
        }
        unless (-e "$_extFile") {
            printf("extFile is not at %s. Creating one\n", $_extFile);
            `touch $_extFile`;
        }
        if ($_chunkSize <= 0) {
            printf("Invalid chunk size. Using 1Mb as default.\n");
            $_chunkSize = 1024*1024;
        }
    }
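    For reference, a minimal squid.conf sketch of one way to blunt this particular script: it works by issuing HTTP Range requests through curl so that each chunk stays under the size limit, so refusing ranged requests at the proxy stops it (the ACL name is arbitrary, and note this will also break legitimate download resuming):

    # Deny any request that carries a Range header (what pget relies on).
    acl ranged_request req_header Range .
    http_access deny ranged_request

    range_offset_limit is sometimes suggested as a softer alternative, since it makes Squid fetch objects from the beginning even for ranged requests.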

    Read the article

  • Why does fileURLWithPath: give me a file://localhost/ URL?

    - by jxpx777
    I have a project I'm working on and this seems like the simplest thing in the world, but the +[NSURL fileURLWithPath:] factory method is returning a strange URL. I created an empty sample project to isolate the problem, and in my app delegate's applicationDidFinishLaunching: method I have this simple code:

    NSString *path = [@"~/Documents" stringByExpandingTildeInPath];
    NSURL *url = [NSURL fileURLWithPath:path];
    NSLog(@"%@ | %@", path, url);

    and the NSLog result looks like this:

    /Users/myusername/Documents | file://localhost/Users/myusername/Documents/

    when I would expect the URL to be file:///Users/myusername/Documents. Any thoughts on why this is behaving like this? (10.6.3 in case it matters.)

    Read the article

  • How can I extract a URL from a sentence that is in an NSString?

    - by 0SX
    What I'm trying to accomplish is as follows. I have an NSString containing a sentence that has a URL within it. I need to be able to grab the URL that is present within any sentence held in an NSString. So, for example, let's say I had this NSString:

    NSString *someString = @"This is a sample of a http://abc.com/efg.php?EFAei687e3EsA sentence with a URL within it.";

    I need to be able to extract http://abc.com/efg.php?EFAei687e3EsA from within that NSString. This NSString isn't static, its structure will change, and the URL will not necessarily be in the same spot of the sentence. I've tried to look into the three20 code but it makes no sense to me. How else can this be done? Thanks for the help.
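    One possible approach, sketched here with NSDataDetector (available from iOS 4; the string is the poster's example, everything else is illustrative):

    NSError *error = nil;
    NSDataDetector *detector = [NSDataDetector dataDetectorWithTypes:NSTextCheckingTypeLink
                                                                error:&error];
    NSTextCheckingResult *match = [detector firstMatchInString:someString
                                                       options:0
                                                         range:NSMakeRange(0, [someString length])];
    if (match) {
        NSLog(@"Found URL: %@", match.URL);
    }

    The detector finds the first link-like substring wherever it sits in the sentence, so the URL's position does not matter.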

    Read the article

  • Wordpress auto-generated "canonical" links - how to add a custom URL parameter?

    - by kiko
    Hello - Does anyone know how to modify the Wordpress canonical links to add a custom URL parameter? I have a Wordpress site with a page that queries a separate (non-Wordpress) database. I passed the URL parameter "pubID" to display individual books and it is working OK. Example: http://www.uglyducklingpresse.org/catalog/browse/item/?pubID=63 But the individual book pages are not showing up properly in Google - the ?pubID parameter is stripped out. I think maybe this is because all the item pages have the same auto-generated "canonical" URL link tag in the source - one with the "pubID" parameter stripped out. Example: link rel='canonical' href='http://www.uglyducklingpresse.org/catalog/browse/item/' Is there a way to perhaps edit .htaccess to add a custom URL parameter to Wordpress, so that the parameter is not stripped out by permalinks and the "canonical" links? Or maybe there's another solution ... Thank you for any ideas!

    Read the article

  • Retrieve a list of the most popular GET param variations for a given URL?

    - by jamtoday
    I'm working on building intelligence around link propagation, and because I need to deal with many short-URL services where a reverse lookup from an exact URL address is required, I need to be able to resolve multiple approximate versions of the same URL. An example would be a URL like http://www.example.com?ref=affil&hl=en&ct=0 Of course, changing GET params in certain circumstances can refer to a completely different page, especially if the GET params in question refer to a profile or content ID. But a quick parse of the page would quickly determine how similar the pages were to each other. Using a bit of machine learning, it could quickly become clear which GET params don't affect the content of the pages returned for a given site. I'm assuming a service where you send a URL and get back a list of very similar URLs could only be offered by the likes of Google or Yahoo (or Twitter), but they don't seem to offer this feature, and I haven't found any other services that do. If you know of any services that do cluster together groups of almost identical URLs in the aforementioned way, please let me know. My bounty is a hug.

    Read the article

  • How to parse an HTML file at a URL?

    - by Warrior
    I am new to iPhone development. I am able to parse an XML file at a URL and retrieve its contents from particular nodes. For parsing at a URL: NSString *path = @"xxxxxxxxxxxxxxxxxxxxxx"; [self parseXMLFileAtURL:path]; For retrieving the data I use NSXMLParser. How can I achieve the same thing if I have an HTML file at my URL (the source of the webpage is HTML)? Please help me out. Thanks.

    Read the article

  • How to change the link URL shown in the web browser's status bar

    - by sunglim
    I have already read many articles about this issue here on SO. I just want to discuss how to do it, NOT the moral issue. For example, on the Google search results page: before I click a link, the link does not show the Google URL, but after I click the link with the Shift key, the URL in the status bar has changed. This means the Google page shows a 'fake' URL. The compressed Google script is too difficult to read and analyze. # edited: The second URL should work in IE8 even if I click with the Ctrl key.
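    What the question describes is usually done by rewriting the link's href at click time, so the status bar shows the clean URL right up until the mouse goes down. A minimal sketch (both URLs are placeholders):

    <a href="http://www.example.com/real-page"
       onmousedown="this.href='http://www.example.com/click-tracker?to=real-page'; return true;">
        Example result
    </a>

    Hovering shows the first URL; the swap only happens in the instant before the browser follows the link.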

    Read the article

  • Clicking on viewlist link in email alert sent for postlist redirecting to http://url/blogs/Lists /Po

    - by Sarita Mishra
    Hi, we have a blogs site with a post list. Users subscribe to the list and get an email alert whenever any change is made to the post list. The email alert that is sent contains the heading given below: Modify my alert settings | View The ‘Colour of Energy’ – now on ... | View Posts. "View The ‘Colour of Energy’ – now on ..." is the link for the post for which the user got the email alert. It redirects to the URL ://url/blogs/Lists/Posts/Dispform.aspx?ID=x, which gives a "Page cannot be found" error. It should redirect to ://url/blogs/Lists/Posts/Post.aspx?ID=x. I want to change the hyperlink URL to that one. Please suggest how to proceed with that.

    Read the article

  • Error while trying to parse a website URL using Python - how to debug it?

    - by mekasperasky
    #!/usr/bin/python
    import json
    import urllib
    from BeautifulSoup import BeautifulSoup
    from BeautifulSoup import BeautifulStoneSoup
    import BeautifulSoup

    def showsome(searchfor):
        query = urllib.urlencode({'q': searchfor})
        url = 'http://ajax.googleapis.com/ajax/services/search/web?v=1.0&%s' % query
        search_response = urllib.urlopen(url)
        search_results = search_response.read()
        results = json.loads(search_results)
        data = results['responseData']
        print 'Total results: %s' % data['cursor']['estimatedResultCount']
        hits = data['results']
        print 'Top %d hits:' % len(hits)
        for h in hits:
            print ' ', h['url']
            resp = urllib.urlopen(h['url'])
            res = resp.read()
            soup = BeautifulSoup(res)
            print soup.prettify()
        print 'For more results, see %s' % data['cursor']['moreResultsUrl']

    showsome('sachin')

    What is wrong in this code? Note that I am feeding each of the 4 links I get out of the search back in, to extract its contents, and then using BeautifulSoup to parse it. How should I go about it?
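    One thing worth checking (an assumption on my part, since the actual traceback isn't shown): the final import BeautifulSoup rebinds the name BeautifulSoup from the class to the module, so the later BeautifulSoup(res) call would fail with a TypeError along the lines of "'module' object is not callable". A minimal sketch of the fetch-and-parse step with the imports trimmed and basic error handling added:

    import urllib
    from BeautifulSoup import BeautifulSoup  # keep only the class import

    def fetch_and_prettify(url):
        # Guard each fetch so one bad link does not abort the whole run.
        try:
            resp = urllib.urlopen(url)
            soup = BeautifulSoup(resp.read())
            return soup.prettify()
        except IOError, e:
            return 'could not fetch %s: %s' % (url, e)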

    Read the article

  • What is the simplest way to generate a domain-specific URL from an application path?

    - by harsh
    I have application-specific URLs like the ones below:

    ~/Default.aspx
    ~/Manage/Page.aspx
    ~/Manage/Account/Default.aspx

    I don't really know what these kinds of paths are called. Now I need to convert them to domain-specific complete URLs, with no ../ or ../../ style segments in the URL. I want URLs like:

    http://www.example.com/Default.aspx
    http://www.example.com/Manage/Page.aspx
    http://www.example.com/Manage/Account/Default.aspx

    Currently I am doing it the following way (assuming I have an HttpRequest object): Request.Url.Host + path.Substring(1); Is there a simpler way to achieve this?
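    These "~/" strings are application-relative virtual paths. A minimal sketch of one common way to expand one, assuming code running where Request is available (the path literal is just an example):

    // Resolve the virtual path against the current request's scheme, host and port.
    string absolute = new Uri(Request.Url, VirtualPathUtility.ToAbsolute("~/Manage/Page.aspx")).ToString();
    // e.g. "http://www.example.com/Manage/Page.aspx"

    Unlike concatenating Request.Url.Host by hand, this also keeps the scheme and port and respects a virtual directory if the application is not installed at the site root.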

    Read the article

  • Python global variable not working in Apache

    - by Suhail
    I am facing an issue with a global variable: when I run under the Django development server it works fine, but under Apache it doesn't work. Here is the code:

    red = "/foodfolio3/test/"

    def showAddRecipe(request):
        #global objc
        if "userid" in request.session:
            objc["ErrorMsgURL"] = ""
            try:
                urlList = request.POST
                URL = str(urlList['url'])
                URL = URL.strip('http://')
                URL = "http://" + URL
                recipe = __addRecipeUrl__(URL)
                if (recipe == 'FailToOpenURL') or (recipe == 'Invalid-website-URL'):
                    #request.session["ErrorMsgURL"] = "Kindly check URL, Please enter a valid URL"
                    objc["ErrorMsgURL"] = "Kindly check URL, Please enter a valid URL"
                    print "here global_context =", objc
                    arurl = HttpResponseRedirect("/foodfolio3/add/import/")
                    arurl['ErrorMsgURL'] = objc["ErrorMsgURL"]
                    #return HttpResponseRedirect("/foodfolio3/add/import/")
                    #return render_to_response('addRecipeUrl.html', objc, context_instance = RequestContext(request))
                    return (arurl)
                else:
                    objc["recipe"] = recipe
                    return render_to_response('addRecipe.html', objc, context_instance = RequestContext(request))
            except:
                objc["recipe"] = ""
                return render_to_response('addRecipe.html', objc, context_instance = RequestContext(request))
        else:
            global red
            red = "/foodfolio3/add/"
            return HttpResponseRedirect("/foodfolio3/login")

    def showAddRecipeUrl(request):
        if "userid" in request.session:
            return render_to_response('addRecipeUrl.html', objc, context_instance = RequestContext(request))
        else:
            global red
            red = "/foodfolio3/add/import/"
            return HttpResponseRedirect("/foodfolio3/login")

    def showLogin(request):
        obj = {}
        obj["error_message"] = ""
        obj["registered"] = ""
        if request.method == "POST":
            if (red == "/foodfolio3/test"):
                next = '/foodfolio3/recipes'
            else:
                next = red
            try:
                username = request.POST['username']
                password = request.POST['password']
                user = authenticate(username=username, password=password)
            except:
                user = authenticate(request=request)
            if user is not None:
                if user.is_active:
                    login(request, user)
                    request.session["userid"] = user.id
                    # Redirect to a success page.
                    return HttpResponseRedirect(next)

    This code works fine in the Django development server, but under Apache the URL always gets redirected to '/foodfolio3/recipes'.
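    A minimal sketch of a workaround, on the assumption that the module-level global itself is the problem: under Apache (mod_wsgi/mod_python) the application typically runs in several long-lived processes, so a global set during one request may not be visible to the process that handles the next request. Keeping the redirect target in the user's session avoids that; the helper names below are hypothetical, only the URLs come from the question:

    from django.http import HttpResponseRedirect

    def remember_and_redirect_to_login(request, came_from):
        # Remember where this user wanted to go, per session rather than per process.
        request.session["next"] = came_from
        return HttpResponseRedirect("/foodfolio3/login")

    def redirect_after_login(request):
        # Fall back to the recipe list if no target was recorded.
        next_url = request.session.pop("next", "/foodfolio3/recipes")
        return HttpResponseRedirect(next_url)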

    Read the article

  • javax.xml.ws.soap.SOAPFaultException: Could not send Message - at JaxWsClientProxy.invoke - caused by HTTP response code: 401 for URL

    - by Mikkis
    I moved working code from dev to test and encountered the following error(s) in test:

    javax.xml.ws.soap.SOAPFaultException: Could not send Message.
        at org.apache.cxf.jaxws.JaxWsClientProxy.invoke(JaxWsClientProxy.java:143)
        ......
        at org.apache.cxf.interceptor.MessageSenderInterceptor$MessageSenderEndingInterceptor.handleMessage(MessageSenderInterceptor.java:64)
        at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:236)
        at org.apache.cxf.endpoint.ClientImpl.invoke(ClientImpl.java:472)
        at org.apache.cxf.endpoint.ClientImpl.invoke(ClientImpl.java:302)
        at org.apache.cxf.endpoint.ClientImpl.invoke(ClientImpl.java:254)
        at org.apache.cxf.frontend.ClientProxy.invokeSync(ClientProxy.java:73)
        at org.apache.cxf.jaxws.JaxWsClientProxy.invoke(JaxWsClientProxy.java:123)
        at $Proxy739.copyIntoItems(Unknown Source)
    Caused by: java.io.IOException: Server returned HTTP response code: 401 for URL: http:///_vti_bin/Copy.asmx
        at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1436)
        at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:379)
        at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.handleResponseInternal(HTTPConduit.java:2046)

    Environment specs: Java 1.6, Tomcat 6, Eclipse Helios, Maven2, CXF 2.2.3.

    As background work, I tried to rule out the usual causes of this kind of error: a bad URL (ruled out, as I am using the same URL in dev and test, and the URL, user id and password are all accessible from both machines) and a connection timeout (the error is not a 404 and does not say the connection timed out; it reports a 401 response code for the URL). I also checked that all the jars, in the same versions, are included in the test environment. Can someone shed some light on understanding and resolving the error? Please let me know if any more details should be included.
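    Since a 401 is an authentication failure coming back from the Copy.asmx endpoint, one thing to compare between dev and test is how the client authenticates. A minimal sketch of setting credentials explicitly on the CXF HTTP conduit (the class and method names are CXF 2.x API; the proxy object, user name and password are placeholders):

    import org.apache.cxf.configuration.security.AuthorizationPolicy;
    import org.apache.cxf.endpoint.Client;
    import org.apache.cxf.frontend.ClientProxy;
    import org.apache.cxf.transport.http.HTTPConduit;

    public final class CopyServiceAuth {

        // 'port' is the generated JAX-WS proxy for Copy.asmx.
        static void configure(Object port) {
            Client client = ClientProxy.getClient(port);
            HTTPConduit conduit = (HTTPConduit) client.getConduit();

            AuthorizationPolicy auth = new AuthorizationPolicy();
            auth.setUserName("DOMAIN\\user");    // placeholder credentials
            auth.setPassword("secret");
            auth.setAuthorizationType("Basic");  // the test server may require NTLM instead
            conduit.setAuthorization(auth);
        }
    }

    If the test SharePoint instance only accepts NTLM rather than Basic, that alone can explain a 401 that did not appear in dev.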

    Read the article

  • Wordpress auto-generated "canonical" link - get them to use a custom URL parameter?

    - by kiko
    Hello - Does anyone know how to modify the Wordpress canonical links to add a custom URL parameter? I have a Wordpress site with a page that queries a separate (non-Wordpress) database. I passed the URL parameter "pubID" to display individual books and it is working OK. Example: http://www.uglyducklingpresse.org/catalog/browse/item/?pubID=63 But the individual books are not showing up properly in Google - the ?pubID parameter is stripped out. I think maybe this is because all the item pages have the same auto-generated "canonical" URL link tag in the source - one with the "pubID" parameter stripped out. Example: link rel='canonical' href='http://www.uglyducklingpresse.org/catalog/browse/item/' Is there a way to perhaps edit .htaccess to add a custom URL parameter to Wordpress, so that the parameter is not stripped out by permalinks and the "canonical" links? Or maybe there's another solution ... Thank you for any ideas!

    Read the article

  • .NET MVC - RESTful URLs - The specified path, file name, or both are too long. The fully qualified

    - by Truegilly
    Hello, I'm creating an MVC application that follows a RESTful URL approach. I am experiencing the following error: "The specified path, file name, or both are too long. The fully qualified file name must be less than 260 characters, and the directory name must be less than 248 characters." This error occurs when my URL length is 225 characters. Surely I can have much longer URLs without this problem, and doesn't this limit relate to file paths rather than URLs? I'm sure some of you MVC guys have experienced this ;) Is there a way round it? Where am I going wrong? Thank you for your time. Truegilly
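    The limit in the message is a file-system path limit that ASP.NET applies while mapping the request URL, which is why a long URL can trip it even though no file is involved. If the project can target ASP.NET 4, one commonly cited sketch is to relax that mapping and raise the URL limits in web.config (the numbers are placeholders):

    <!-- Sketch, assuming ASP.NET 4 or later. -->
    <system.web>
      <httpRuntime relaxedUrlToFileSystemMapping="true"
                   maxUrlLength="1024"
                   maxQueryStringLength="2048" />
    </system.web>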

    Read the article

  • JavaScript: match part of a URL, if statement based on the result

    - by nick
    Here is an example of the URL I'm trying to match: http://store.mywebsite.com/folder-1/folder-2/item3423434.aspx What I'm trying to match is http://store.mywebsite.com/folder-1, except that "folder-1" will always be a different value. I can't figure out how to write an if statement for this. Example (pseudo-code):

    if (url contains http://store.mywebsite.com/folder-1)
        do this
    else if (url contains http://store.mywebsite.com/folder-2)
        do something else
    etc
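    A minimal JavaScript sketch of that pseudo-code, assuming the check runs in the page itself and the prefixes are known literals:

    // Branch on the first path segment of the current page URL.
    var href = window.location.href;

    if (href.indexOf('http://store.mywebsite.com/folder-1/') === 0) {
        // do this
    } else if (href.indexOf('http://store.mywebsite.com/folder-2/') === 0) {
        // do something else
    }

    If "folder-1" is not known in advance, the segment can be read directly instead, e.g. window.location.pathname.split('/')[1], and compared or switched on.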

    Read the article

  • In Drupal, can you control block display according to e.g. number of URL parts?

    - by james6848
    I'm having a little trouble controlling page-specific block display in Drupal. My URLs will have this typical structure: http://www.mysite.co.uk/section-name/sub-page/sub-sub-page The 'section-name' part will effectively be fixed, but there will be many sub-pages (far too many to reference explicitly). I need to somehow control block display as follows: one block will show on all pages where the URL contains 'section-name/sub-page' but not on 'section-name/sub-page/sub-sub-page' pages; conversely, another block will show on all pages where the URL contains 'section-name/sub-page/sub-sub-page' but not on 'section-name/sub-page' pages. My only idea is to write a bit of PHP that looks for the string 'section-name' and then also counts URL parts (or even the number of slashes). Not sure how to implement that though :) Your help would be appreciated!
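    A minimal sketch of that PHP idea, assuming Drupal 6/7 with the block's visibility setting switched to "PHP code" and that the visible URL is the path alias (return TRUE to show the block):

    <?php
    // Show only on section-name/sub-page (exactly two path parts).
    $parts = explode('/', drupal_get_path_alias($_GET['q']));
    return count($parts) == 2 && $parts[0] == 'section-name';
    ?>

    The second block would use count($parts) == 3 with the same first-part check.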

    Read the article
