Search Results

Search found 9935 results on 398 pages for 'about pages'.


  • A PDF viewer for large margins in fullscreen

    - by jmn
    I am looking for a way to pleasantly read PDF files on my widescreen (22" 1680x1050) monitor. My problem with all the PDF viewer applications I have tried is that they do not handle wide and high margins well. If I go to fullscreen mode in my viewer and zoom in so that the extra margins are cropped, I can view the pages nicely; the annoyance, however, is that I have to reposition the pages every time I navigate to another page. I am sure there must be a way to make a PDF viewer that solves this problem, and perhaps there is one you know of? I am aware of something called PDF Reflow in Acrobat Reader, but that only works with certain specific (tagged) files. I want a PDF viewer with a smarter zoom/next-page function or an automatic margin-crop function. Is there such a thing?
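
    One possible workaround, assuming a TeX Live installation (which ships the pdfcrop utility): trim the white margins of every page once, so the viewer no longer needs per-page repositioning. File names here are placeholders.

        # Crop each page to its content bounding box, keeping a small 5pt border
        pdfcrop --margins 5 input.pdf cropped.pdf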

    Read the article

  • how to print labels from UPS printer on UPS website

    - by paynes_bay
    I have several computers, in my office, that have UPS printers attached to them. On most of these computers if you go to ups.com, login, and print a shipping label out, it prints out just fine. The website somehow selects the appropriate printer and prints to it. It doesn't present a prompt, asking you to select the printer, the number of pages, etc - it just prints it. Only problem: there's one computer on which it's not doing this and I don't know why. I can see the printer in Printers and Faxes and can print out test pages from the Properties tab, so the printer clearly works - it just isn't printing out from UPS's website. Any ideas?

    Read the article

  • Apache won't start after creating symbolic link

    - by Carlin
    I'm installing Apache for the first time and trying to display some webpages on localhost. Apache's default path for serving web pages is /var/www/html/ but I don't have permissions to write there. Rather than change ownership of the entire directory, I decided to get rid of the /html/ folder in /var/www/ and created it in my home directory. Then I made a symbolic link (ln -s /home/me/html/ /var/www/) hoping Apache would serve web pages from my home directory while keeping the default path and following the symbolic link to my home directory. When I go to start the apache service with service httpd start I get: Job failed. See system journal and 'systemctl status' for details.
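
    A hedged diagnostic sketch (paths taken from the question; the "service httpd" / systemctl wording suggests a Fedora/RHEL-style system, where the SELinux step usually applies):

        # Read the actual error Apache logged
        systemctl status httpd.service
        journalctl -u httpd --since today

        # Apache needs execute (traverse) permission on every directory in the path
        namei -m /home/me/html/
        chmod o+x /home/me /home/me/html

        # FollowSymLinks must be enabled for the directory serving the link
        grep -r "FollowSymLinks" /etc/httpd/conf /etc/httpd/conf.d

        # On SELinux systems, the files also need a web-servable context
        sudo chcon -R -t httpd_sys_content_t /home/me/html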

    Read the article

  • My virtualhost not working for non-www version

    - by johnlai2004
    I have a development web server (Ubuntu + Apache) that can be accessed via the URL glacialsummit.com. For some reason, http://www.glacialsummit.com serves pages from the /srv/www/glacialsummit.com/ directory, but http://glacialsummit.com serves pages from the /var/www/ directory. Here's what some of my virtualhost config files look like.

    filename: /etc/apache2/sites-enabled/glacialsummit.com

        <VirtualHost 97.107.140.47:80>
            ServerAdmin [email protected]
            ServerName glacialsummit.com
            ServerAlias www.glacialsummit.com
            DocumentRoot /srv/www/glacialsummit.com/public_html/
            ErrorLog /srv/www/glacialsummit.com/logs/error.log
            CustomLog /srv/www/glacialsummit.com/logs/access.log combined
        </VirtualHost>

        <VirtualHost 97.107.140.47:443>
            ServerAdmin [email protected]
            ServerName glacialsummit.com
            ServerAlias www.glacialsummit.com
            DocumentRoot /srv/www/glacialsummit.com/public_html/
            ErrorLog /srv/www/glacialsummit.com/logs/error.log
            CustomLog /srv/www/glacialsummit.com/logs/access.log combined
            SSLEngine on
            SSLCertificateFile /etc/ssl/localcerts/www.glacialsummit.com.crt
            SSLCertificateKeyFile /etc/ssl/localcerts/www.glacialsummit.com.key
            <FilesMatch "\.(cgi|shtml|phtml|php)$">
                SSLOptions +StdEnvVars
            </FilesMatch>
            <Directory /usr/lib/cgi-bin>
                SSLOptions +StdEnvVars
            </Directory>
            BrowserMatch ".*MSIE.*" \
                nokeepalive ssl-unclean-shutdown \
                downgrade-1.0 force-response-1.0
        </VirtualHost>

        <VirtualHost 97.107.140.47:80>
            ServerAdmin [email protected]
            ServerName project.glacialsummit.com
            ServerAlias www.project.glacialsummit.com
            DocumentRoot /srv/www/project.glacialsummit.com/public_html/
            ErrorLog /srv/www/project.glacialsummit.com/logs/error.log
            CustomLog /srv/www/project.glacialsummit.com/logs/access.log combined
        </VirtualHost>

        ## i have many other vhosts that work fine in this file

    filename: /etc/apache2/sites-enabled/000-default

        <VirtualHost 97.107.140.47:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog /var/log/apache2/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    filename: /etc/apache2/ports.conf

        NameVirtualHost 97.107.140.47:80
        Listen 80

        <IfModule mod_ssl.c>
            # SSL name based virtual hosts are not yet supported, therefore no
            # NameVirtualHost statement here
            Listen 443
        </IfModule>

    How do I make http://glacialsummit.com serve web pages from /srv/www/glacialsummit.com/public_html just like http://www.glacialsummit.com?
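
    One quick way to see which virtualhost Apache actually maps each hostname to is its vhost dump; a sketch (the Host headers just reuse names from the question):

        # Show how Apache resolved the NameVirtualHost sections
        apache2ctl -S

        # Confirm which document root each hostname really hits
        curl -s -H "Host: glacialsummit.com" http://97.107.140.47/ | head
        curl -s -H "Host: www.glacialsummit.com" http://97.107.140.47/ | head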

    Read the article

  • Kindle 2 and PDFs in landscape

    - by doronkatz
    Hi guys, I am looking at getting a Kindle 2. I have read a lot about its PDF support (or lack thereof) and wanted to ask someone who has a Kindle a question. If you read a PDF in landscape mode, does it shrink the text to fit it all on the one screen, or does it increase the font size and split it into two or more pages? I have another reader, the iRiver Story, and it does that: it splits the page into multiple screens, making it readable. I know you can't zoom or anything like that in portrait view (I assume). I know you will say stick with the iRiver, but the build of the Kindle is a lot better (metallic back) and it's useful to have a hybrid Amazon book/PDF reader in one.

    Read the article

  • apache2: Require valid-user for everything except "special_page"

    - by matt wilkie
    With Apache2, how may I require a valid user for every page except these special pages, which can be seen by anybody? Thanks in advance for your thoughts. Update in response to comments; here is a working apache2 config:

        <Directory /var/www/>
            Options Indexes FollowSymLinks MultiViews
            Order allow,deny
            allow from all
        </Directory>

        # require authentication for everything not specifically excepted
        <Location / >
            AuthType Basic
            AuthName "whatever"
            AuthUserFile /etc/apache2/htpasswd
            Require valid-user
            AllowOverride all
        </Location>

        # allow standard apache icons to be used without auth (e.g. MultiViews)
        <Location /icons>
            allow from all
            Satisfy Any
        </Location>

        # anyone can see pages in this tree
        <Location /special_public_pages>
            allow from all
            Satisfy Any
        </Location>
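
    A quick way to confirm a config like this behaves as intended, assuming it has been reloaded and the htpasswd file is populated, is to compare status codes with curl:

        # The protected tree should demand credentials (401)...
        curl -s -o /dev/null -w "%{http_code}\n" http://localhost/
        # ...while the excepted tree should be public (200)
        curl -s -o /dev/null -w "%{http_code}\n" http://localhost/special_public_pages/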

    Read the article

  • Batch processing multi-page TIFFs in IrfanView

    - by hemalshah
    I have to convert the DPI of more than 5k TIFF images on a monthly basis from 200x200 to 100x100. I can do that in IrfanView using a .bat file that I have created. The following is the .BAT file code:

        @"c:\program files\irfanview\i_view32.exe" "e:\batch1\*.tif" /aspectratio /resample /tifc=4 /dpi=(100,100) /convert="e:\batch2\*.tif" %*

    where tifc=4 is Fax 4 compression. However, the above code only changes the DPI of the first page of each TIFF; all the remaining pages are still 200 DPI. I am using WinXP Professional and IrfanView. Can anyone tell me what I am missing, or suggest another program where I can create a .bat file and run the batch process from the command line?
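
    If IrfanView keeps converting only the first page, one alternative sketch uses ImageMagick, which processes every frame of a multi-page TIFF. This assumes ImageMagick's convert.exe is installed and on the PATH (beware that Windows ships its own unrelated convert.exe), and it reuses the folder names from the question:

        @echo off
        rem Resample every TIFF in e:\batch1 from 200 to 100 DPI, keeping
        rem CCITT Group 4 (Fax 4) compression, writing results to e:\batch2
        for %%f in (e:\batch1\*.tif) do (
            convert "%%f" -resample 100x100 -compress Group4 "e:\batch2\%%~nxf"
        )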

    Read the article

  • Google Chrome Automatic Page Load

    - by WDC
    Google Chrome keeps automatically loading the second page of certain websites. This is helpful on sites like Netflix: instead of clicking a link for page 2, I can just keep scrolling and the next page is ready for me. But on online clothing sites it gets backed up loading all the following pages, misplacing links, and loading the next page so quickly that it replaces the page I'm actually trying to view. Considering clothing sites generally have upwards of 20 to 200 pages of clothing, this is really annoying, since Chrome tries to load all of them. How do I turn off the automatic page load?

    Read the article

  • SharePoint 2010 Search - not searching additional content sources

    - by Chris W
    I've got SP 2010 crawling a secondary intranet system that we'll run in parallel as part of a long-running migration to SharePoint when it releases. Whilst it's crawling the pages without problem, I can't see how to get the results to appear in the Quick Search results when the user does a search from the little search dialog box on the home page. Searches completed within My Sites pages list results from both the SharePoint installation and the external content source. Searches from the main search dialog only list results of SharePoint items. I tried adding the drop-down option to select the site to search, but this list only includes the name of the current site and doesn't offer an 'All Sites' scope option, which I think would include the content. What am I doing wrong?

    Read the article

  • IE and Google Chrome timeout on an IIS6 hosted SSL page that Firefox handles well

    - by Thomas
    Ok, here's the scenario: Up until a few weeks ago, none of us noticed anything wrong with the corporate website. People were using it without complaint. Then, a client complained that a specific page on the site was timing out for him, and only when he committed a POST action on a form filled with data. I checked it out, and it timed out for me, too. But, it only timed out in Google Chrome and IE, not in Firefox. Additionally, the same page, on the same server, but served from a different domain name (one not under the protection of SSL, either) does not time out under any browser. To clarify: https://www.mysite.com/changes.php times out on POST, but the same with http works fine. That distinction (SSL vs. non-SSL) seems to be important, as nothing else has changed. Our certificate is valid, and Firefox detects no errors thrown by the page. I've looked at the Request and Response headers from the page, and they all follow the correct formats. Then, after wandering through the site, I noticed a few other things. Both IE and Chrome will frequently time out on any page that is PHP-based. They never time out on static images or HTML files. I've looked at the site from a variety of different servers, my home and work workstations, and my netbook. Because of that, I've discounted a viral infection, as I highly doubt a virus is going to hit every one of the machines to which I have access in exactly the same manner. My setup is: Server: Win2k3, IIS6, PHP 5.2.9-1. Clients: IE7, IE8, and Chrome (regular and dev channel) see frequent timeouts on PHP pages; Firefox 2 and Firefox 3 see no timeouts, and Firebug shows no errors or even lengthy periods serving the pages. I've spent 2 days searching for any tech knowledge that I can find, and my search parameters are all too general. Everyone has problems loading SSL pages in IE and Chrome for a wide variety of reasons. The infrequent nature of the timeouts and the fact that there are no errors being reported anywhere is starting to drive me insane. Does anyone have any insight on a problem like this?

    Read the article

  • Keeping websites from knowing where I live

    - by D Connors
    This question is about practicality and annoyance, not security. I live in Brazil and, apparently, every single website I visit knows about it. Usually that's okay. But there are quite a few sites that don't make use of that information adequately. For instance: Bing keeps thinking that Brazilian pages are way more relevant to me than American ones (which they're not); google.com always redirects me to google.com.br; Microsoft automatically sends me to horribly translated support pages in Portuguese (which would just be easier to read in English). These are just a few examples. Usually it's stuff I can live with (or work around), but some of them are just plain irritating. I have geolocation disabled in Firefox, so I guess they're either getting this information from my IP or from Windows itself (which I bought here). Is there a way to avoid this? Either tell them nothing or make them think I live somewhere else?

    Read the article

  • wget not converting links

    - by acrosman
    I am trying to mirror a fairly large site (20,000+ pages) prior to a major overhaul. Basically, I need a backup before cutting over to the new one in case we forgot something we need (we'll have about 1,000 pages at launch). The site is run on a CMS that I cannot easily extract usable data from, so I'm trying to make the copy with wget. My problem is that wget does not appear to be actually converting links, despite the presence of --convert-links or -k in the command. I've tried a couple of different combinations of flags, but I haven't been able to get the output I need. The most recent failed attempt was: nohup wget --mirror -k -l10 -PafscSnapshot --html-extension -R *calendar* -o wget.log http://www.example.org & I've also tried --backup-converted, and --convert-links instead of -k (not that it should have mattered). I've done it with and without -P and -l; again, not that they should matter. The result is files that still have links like: http://www.example.org//ht/d/sp/i/17770
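
    Two things worth checking, shown as a sketch of the same command: an unquoted -R *calendar* is expanded by the shell before wget ever sees it, and --convert-links only rewrites files after the entire crawl finishes, so an interrupted or killed run leaves every link untouched.

        # Quote the reject pattern; long option names kept for clarity
        nohup wget --mirror --convert-links --backup-converted \
            --html-extension -l 10 -P afscSnapshot \
            -R "*calendar*" -o wget.log http://www.example.org &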

    Read the article

  • Open source CMS for a university department

    - by Greg Kuperberg
    I realize that this type of question gets asked over and over again. Nonetheless, I want to ask a more specific version. I'm in a university math department. Long ago our sysadmins (or just one at the time) switched to a web content management system. At the time, Zope looked like an informed choice. We have used Zope for years, but at least in my opinion, it has always been a controversial decision. At the time I didn't understand why it was so important to have a web CMS. Now I see that it certainly is important, but I don't know that it should be Zope. The good (even necessary) features of Zope for us are: It's free and Linux-based. It is a true CMS and not something else (e.g. wiki or blog) It lets you write HTML and scripts. What I really don't like about Zope is that the outcome of using it is all-or-nothing in a lot of ways. At least in convenient use, it ends up dividing the enterprise into superusers who can do everything, and lusers who can't do anything (except write their own home pages in plain HTML). It has a huge user manual, which end users won't have time to read. Somehow with the access permissions, the simple thing to do is to let a few admins access all of the source and data and that's it. Since this is a math department, the user base varies from real novices to people who understand computers reasonably well. But as it stands, any change that involves Zope has to go through the sysadmins. When the sysadmins are in a hurry, sometimes they will also just add plain HTML pages to the web site instead of using the Zope framework. It doesn't help matters that Zope is fairly disk-intensive and fairly hype-intensive. Not to dwell on Zope too much, but I am wondering what is the right web CMS for a mixed user base of terminal novices, quick studies, and experienced users. Some users might want intermediate permissions, e.g. read permission but not write permission, or permission to change some subset of the pages or see some subset of the database tables. Also it should be Linux-based and open source and a little bit scalable, and of course widely used and well-supported is a good idea. I might guess that the answer is Drupal just because that was the general answer before, but I don't know if it is the right type of CMS for this purpose. (But note that Python is a relatively popular language in a math department, among other reasons because Sage is based on Python.) I can see that I didn't completely define the question and that people are guessing what type of site it is. It is the UC Davis Math Department. The main structure of the site is not suitable for a wiki and it is also not the same thing as a course environment like Moodle. Rather, the site is mostly structured as a generic medium-small enterprise. Some components of the site could be a wiki, Moodle, LaTeX plugin, Request Tracker, etc. However, the main issue is not these components. The main issue is that it would be better to decentralize management of the site. Right now, everything that is in the Zope CMS has to go through the sysadmins. Every other user in the department either has to put in a request to them, or write their own web pages with no help from Zope. There are two main reasons for this: (1) Other people in the department don't have time to read the Zope manual. (2) It's a hassle to set up intermediate permissions in Zope. However, there are other people in the department who know how to write computer programs and use markup languages. 
I wouldn't want a solution that assumes that users either can't be trusted with much more than drag-and-drop, or that they are IT professionals who sleep with documentation manuals. I'm wondering if Plone/Zope still has this quality, since certainly Zope by itself does. But I also wonder sometimes if common-sense flexibility is unfashionable these days, and whether things in general have to be either mindlessly easy or incredibly powerful.

    Read the article

  • Mac OS X Duplex Printing Paper Handling Oddness

    - by Christian Lindig
    I like to print on stationery with a pre-printed letterhead using Preview.app and a duplex-capable HP PostScript printer (Color LaserJet 4700). One would think that pre-printed stationery could be placed into one of the trays and then printed on front and reverse side. Unfortunately, the print dialog handles one-page and two-page documents differently: the stationery needs to be placed into the tray differently for a one-page document than for a two-page document. This is not obvious when printing on plain paper, but becomes obvious once you mark, say, the upper left front corner of pages and then print different documents on them. I checked the PostScript code generated, and indeed it is different for one-page versus two-page documents with respect to duplex printing, probably causing the difference in paper handling. Obviously this makes it difficult to print pre-printed stationery in duplex mode. I expected others to have stumbled upon this but could not find specific help so far. Any ideas? This is on OS X 10.6 and I checked two different printers.
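
    One way to pin down the difference, assuming the two jobs are first saved as PostScript files (the file names below are placeholders): the duplex request normally lives in the driver's setpagedevice block, so the two spools can be compared directly.

        # Compare the duplex page-device settings emitted for a one-page
        # and a two-page job
        grep -n -i -E "Duplex|Tumble" one-page.ps two-page.ps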

    Read the article

  • Firefox being really sluggish on php.net website?

    - by Rory
    Is it just me, or is Firefox (3.5 on Ubuntu 9.10 Karmic) really sluggish when opening the PHP.net website? When I have several tabs open with just the PHP.net website and I tab up and down (with Control-PageUp/Down), it's slow to change tabs. If I do it quickly, Firefox freezes for a few seconds (I know because it goes grey, which is a Compiz feature to show unresponsive windows). The CPU usage also goes up when I'm tabbing to PHP.net pages. UPDATE: This appears to happen for all PHP.net webpages. For other pages, on other sites, Firefox is fine (for me).

    Read the article

  • Internet very slow when upgrading to Ubuntu 9.10

    - by roojoo
    I was running Ubuntu 8.x on my desktop and everything worked fine. I'm using wired internet and it worked perfectly; pages loaded pretty fast. However, when I decided to upgrade to 9.10, the upgrade failed at some point, though I was left with what appeared to be Ubuntu 9.10. Since then the internet has been weird. When I go to a website it takes at least 10 seconds for the page to display, but if I'm on a site and navigate to other pages on the same website, it loads quickly. This never happened prior to the upgrade. I thought this might be due to the upgrade not installing correctly, so I did a fresh install of Xubuntu 9.10, but the problems are still the same. I'm writing this on a Vista machine over the wireless network and the internet is fine. Does anyone have any ideas of the issue? Thanks.
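
    A first-page-slow, later-pages-fast pattern often points at name resolution rather than bandwidth, since later requests reuse the cached lookup. A quick hedged check from a terminal (the hostname is just an example):

        # If the lookup alone eats the ~10 seconds, DNS is the likely culprit
        time host www.example.com

        # Compare against the full page fetch
        time wget -q -O /dev/null http://www.example.com/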

    Read the article

  • Windows 2008 IIS 7 PHP Caching / Blank Page Problems?

    - by darkAsPitch
    I don't even know how to explain this. The only thing I can think is "why am I working with a Windows server?" I am renting a dedicated 1and1 server on which I installed PHP myself, with FastCGI and caching (I'm pretty sure I checked OK on something about dynamic caching for PHP when I installed it). Every few hours of intensive PHP processing, my pages start locking up, usually just showing blank pages with no errors whatsoever. Just now, I checked a page - let's call it a.php - and it was showing the results of b.php - I thought I had been hacked! Simply restarting the IIS server, however, fixes the problem. Any ideas / help / knowledge on similar problems with Windows 2008?

    Read the article

  • Is it possible to add your own bookmarks/tabs to a PDF file?

    - by Pure.Krome
    Hi folks, I've purchased a few e-books and love it. Some come with a massive list of bookmarks (kewl!) and some not. Regardless, is there a way I can create my OWN bookmarks so I can jump to specific pages? I don't want to mess up the current list of official bookmarks that came with the e-books (where they were provided). It's like I want to add my own sticky-note tabs so I can quickly jump between pages etc., without having to remember the page number. Also, this is for Adobe Reader (the free thingy). If it's available in another program (eg. Foxit, please say so also :) ) cheers!

    Read the article

  • Using FastCGI for PHP on Mac OS X

    - by DanieL
    I have apache2 running on a Mac OS X (10.6) machine, and it is currently serving PHP pages fine using php5_module, but I would like to configure fastcgi_module to handle the PHP pages. I have tried using the configuration found on www.fastcgi.com but I get the following errors:

        [warn] FastCGI: (dynamic) server "/Path/to/script.php" has failed to remain running for 30 seconds given 3 attempts, its restart interval has been backed off to 600 seconds
        [warn] FastCGI: server "/usr/bin/php" has failed to remain running for 30 seconds given 3 attempts, its restart interval has been backed off to 600 seconds

    I'm thinking this is because PHP has not been compiled with FastCGI, but seeing as it came with Mac OS X, I'm not sure how to recompile it. Is this the problem? And if so, how do I recompile PHP with FastCGI?
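
    One quick check before recompiling anything, as a sketch (paths assumed): a PHP binary reports its SAPI, and mod_fastcgi needs a CGI/FastCGI binary (conventionally installed as php-cgi), not the command-line one.

        # Must report CGI/FastCGI, not "Command Line Interface"
        php -i | grep "Server API"

        # See whether a FastCGI-capable binary is present at all
        ls -l /usr/bin/php-cgi && /usr/bin/php-cgi -v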

    Read the article

  • HP LaserJet Pro 400 Color M451dn Phantom Print Jobs

    - by francisswest
    Scenario: Multiple printers hooked up to a print server (2008r2), including this HP LaserJet Pro 400 Color M451dn. All machines using the printer run Windows 7 Enterprise x64. Problem: Every couple of days, the users who frequent this printer let me know that a few dozen pages with random characters down one side of the paper print out. This usually happens during the evening, when no one is around to send print jobs to it. What I have done: Reviewed the printer log for what I assume are the print jobs in question. I have also looked into printer driver compatibility and found no issues. Question: Is there a known issue with this printer or similar printers, and is there a solution that people are familiar with when they see multiple pages of gibberish printing out?

    Read the article

  • Faster (Squid + Apache httpd + Apache Tomcat)

    - by letronje
    We have a production setup with Squid in front (caching images, js, css, etc.), Apache httpd in the middle (prefork + mod_rewrite + mod_jk/AJP + mod_deflate + mod_php for a few PHP pages), and Apache Tomcat 5.5 at the end serving all the dynamic stuff. What would be the best way to reduce the overhead of having 3 servers in the request path? I am wondering if replacing httpd with a faster web server like nginx/lighttpd would help. httpd right now does the job of URL rewriting (for clean URLs), talking to Tomcat (via mod_jk), compressing output (mod_deflate), and serving some low-traffic PHP pages. What would be an ideal replacement for httpd given that we need these features? Is there a way to replace (squid + apache) with a single entity that does caching well (like Squid) for static stuff, rewrites URLs, compresses responses, and forwards dynamic stuff directly to Tomcat? I've heard about Varnish Cache, and I'm wondering if it can help.
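
    As a sketch of the single-front-entity idea: nginx can cache static responses, rewrite URLs, gzip, and proxy straight to Tomcat's HTTP connector. Note this swaps mod_jk/AJP for plain HTTP proxying (stock nginx has no AJP module), the low-traffic PHP pages would still need something like php-fpm (not shown), and the port, cache path, and rewrite rule are assumptions:

        # Minimal nginx front end: caching + gzip + clean-URL rewrite + proxy
        proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=static:50m;

        server {
            listen 80;
            gzip on;                                 # replaces mod_deflate

            location ~* \.(css|js|png|jpg|gif)$ {
                proxy_pass http://127.0.0.1:8080;    # Tomcat HTTP connector
                proxy_cache static;                  # replaces Squid for assets
                proxy_cache_valid 200 1h;
            }

            location / {
                rewrite ^/old/(.*)$ /app/$1 last;    # replaces mod_rewrite
                proxy_pass http://127.0.0.1:8080;
            }
        }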

    Read the article

  • Download a website that requires log-in with HTTrack Website Copier

    - by H.Moss
    Hi guys! I have been researching how to download the content of a site that requires a username and password. This is actually harder than I thought it would be. I tried to use HTTrack Website Copier and followed the instructions below, but it's not working!

    Q: I can not access several pages (access forbidden, or redirect to another location), but I can with my browser, what's going on?

    A: You may need cookies! Cookies are specific data (for example, your username or password) that are sent to your browser once you have logged in to certain sites, so that you only have to log in once. For example, after having entered your username on a website, you can view pages and articles, and the next time you go to this site, you will not have to re-enter your username/password. To "merge" your personal cookies into an HTTrack project, just copy the cookies.txt file from your Netscape folder (or the cookies located in the Temporary Internet Files folder for IE) into your project folder (or even the HTTrack folder).
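
    For what it's worth, the same exported-cookie trick works with other mirroring tools as well; a hedged wget sketch, assuming a Netscape-format cookies.txt and a placeholder URL:

        # Reuse the logged-in browser session instead of scripting the login
        wget --load-cookies cookies.txt --mirror --convert-links \
            --page-requisites http://www.example.com/members/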

    Read the article
