Search Results

Search found 593 results on 24 pages for 'wget'.

Page 3 of 24

  • Using wget to save sequential files as well as renaming the file extension

    - by Ian
    I run a cron job that requests a snapshot from a remote webcam at a local address:

      wget http://user:[email protected]/snapshot.cgi

    This creates the files snapshot.cgi, snapshot.cgi.1, snapshot.cgi.2, and so on, each time it's run. My desired result would be files named something like file.1.jpg, file.2.jpg; basically, sequentially or date/time-named files with the correct file extension instead of .cgi. Any ideas?
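
    One possible approach (a sketch, not from the article; the URL is the one above, and the output directory is assumed to exist):

      # Save each snapshot under a date/time name with a .jpg extension.
      # -O names the output file explicitly, so no snapshot.cgi.N copies pile up.
      wget -q -O "/path/to/snapshots/snapshot-$(date +%Y%m%d-%H%M%S).jpg" \
          "http://user:[email protected]/snapshot.cgi"

    For strictly sequential numbers instead of timestamps, a small counter file read and incremented by the cron script would do the same job.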

    Read the article

  • Escape a ! in the password parameter of wget

    - by Dave
    I'm trying to execute something like this:

      wget --user=foo --password=bar! url

    The ! in the password is causing problems. I've tried escaping it with a backslash, as in --password=bar\! I've tried encapsulating it in single and double quotes. I put the password in a separate file and tried --password=$(cat pass.txt). Each time, I get a 403 Forbidden. Using -d, I see that the SSL handshake is successful. On the Windows command line, the command works. My assumption is that I need to escape the ! differently, but I don't know how else to do it.
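
    For reference, the usual ways to pass a literal ! to wget (a sketch; if all of these still return 403, the ! is probably not the culprit):

      # Interactive bash expands '!' via history expansion; single quotes or a
      # backslash suppress that. Inside a script, '!' needs no escaping at all.
      wget --user=foo --password='bar!' url
      wget --user=foo --password=bar\! url
      # Or keep the shell out of it entirely by reading the password from a file:
      wget --user=foo --password="$(cat pass.txt)" url

    Since the same command works on the Windows command line, comparing the two -d traces byte for byte may show what actually differs in the request.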

    Read the article

  • wget not completely processing the http call

    - by user578458
    Here is a wget command that runs a report on an HTML / PHP stack report suite hosted by a third party (we don't have control over the PHP or HTML page):

      wget --no-check-certificate --http-user=/myacc --http-password=mypass -O /tmp/myoutput.csv "https://myserver.mydomain.com/mymodule.php?myrepcode=9999&action=exportcsv&admin=myappuserid&password=myappuserpass&startdate=2011-01-16&enddate=2011-01-16&reportby=mypreferredview"

    All the elements are working perfectly:
    - --http-user / --http-password: the username and password a browser would prompt for in its standard popup
    - -O /tmp/myoutput.csv: the output file of interest
    - the URL: the file is generated on the fly from the parameters:
      - myrepcode=9999: a reference to the report in question
      - action=exportcsv: internally written into the function
      - admin=myappuserid and password=myappuserpass: the third party operates SSL to access the site, then an internal username and password stored in a database to access the functions of the site
      - startdate=2011-01-16 and enddate=2011-01-16: parameters specific to report 9999
      - reportby=mypreferredview: an option in the report that facilitates different levels of detail or aggregation

    The problem is that the reportby parameter is a radio-button selection from a list of five options (sure enough, the default is the highest level of aggregation; I want the last one, which is the most detailed). The labels shown under "View by" in the HTML page for the reportby options are: The Default, My Least Preferred, My Second Least Preferred, My Third Least Preferred, My Preferred. No matter which of the reportby items I select in the wget statement, the default is always executed. Questions: 1) Has anyone come across the notation id=inputname[inputelement] in HTML? I spoke to a senior web developer who had never seen this notation for inputs, and an extensive search of w3schools turned up nothing either. 2) Can a wget command select a non-default radio item when executing? This will probably be met with "use cURL", but the wget approach works very well in the limited environment I am operating in, particularly as I need to download around 10,000 such items. Thanks in advance.
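
    On question 1, one point worth checking (an educated guess, not from the article): a form submits a radio group under its name attribute, not its id, and it sends the option's value attribute rather than its visible label. The inputname[inputelement] style is typical of PHP array field names, and the brackets in such a name must be percent-encoded when placed in a URL. A hypothetical sketch, where reportby[view] and detail5 stand in for whatever the real name and value attributes turn out to be:

      # %5B and %5D are the percent-encoded [ and ] of a PHP-array field name;
      # "reportby[view]" and "detail5" are made-up placeholders.
      wget --no-check-certificate --http-user=/myacc --http-password=mypass \
          -O /tmp/myoutput.csv \
          "https://myserver.mydomain.com/mymodule.php?myrepcode=9999&action=exportcsv&admin=myappuserid&password=myappuserpass&startdate=2011-01-16&enddate=2011-01-16&reportby%5Bview%5D=detail5"

    Viewing the page source for the value attribute of the "My Preferred" radio button would confirm the exact string to send.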

    Read the article

  • Wget a directory with exact filenames?

    - by kaspr
    The following works because I inserted the exact filename:

      wget --referer=http://www.*****.com --cookies=on --load-cookies=cookie.txt --keep-session-cookies --save-cookies=cookie.txt http://www.*****.com/doc/GG-15252252.html

    But if I just request the doc directory, I get a 403 error message:

      Connecting to www.*****.com|***.**.***.**|:**... connected.
      HTTP request sent, awaiting response... 403 Forbidden
      2010-11-04 21:25:38 ERROR 403: Forbidden.

    So I can't list the directory; what can I do? Can anybody help? Thanks! :)
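
    If directory listing is disabled on the server (which is what the 403 suggests), wget cannot discover filenames by itself; it needs either an index page that links to them or an explicit list. A sketch using a URL list (example.com stands in for the masked domain, and the second URL is invented):

      # Put the known or guessed document URLs in a file, then fetch them all.
      printf '%s\n' \
          'http://www.example.com/doc/GG-15252252.html' \
          'http://www.example.com/doc/GG-15252253.html' > urls.txt
      wget --referer=http://www.example.com --load-cookies=cookie.txt \
          --keep-session-cookies --save-cookies=cookie.txt -i urls.txt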

    Read the article

  • wget not working with domain on local machine

    - by user568829
    Basically, I have some PHP scripts that need to be run as cron jobs. Let's say the script needing to be run is:

      http://admin.somedomain.com/cron_jobs/get_stats

    If I run the script from the local machine it gives me a 404 Not Found error, so I entered the following into /etc/hosts:

      XX.XX.XX.45 admin.somedomain.com

    Now wget works fine from the local machine to that domain. However, when I restart Apache, that domain no longer works. Here is the config for that site in /etc/apache2/sites-available:

      NameVirtualHost XX.XX.XX.45:80
      <VirtualHost XX.XX.XX.45:80>
          ServerName admin.somedomain.com
          DocumentRoot /var/www/admin.somedomain.com/
          <Directory "/var/www/admin.somedomain.com">
              AllowOverride All
              Options Indexes
              Order deny,allow
              Allow from all
          </Directory>
          ErrorLog /var/log/apache2/admin.somedomain.com-error_log
          CustomLog /var/log/apache2/admin.somedomain.com-access_log combined
      </VirtualHost>

    It just goes to the default site config, showing "It works!". If I take out that entry in /etc/hosts and restart Apache, the website at that domain works fine again. Can anyone point me in the right direction here? Thanks
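
    One way to see what Apache actually did with that config (a diagnostic sketch, not a confirmed fix):

      # Show how Apache parsed the virtual hosts and which one is the
      # default for each address:port pair.
      apache2ctl -S

    If the default site is listed ahead of admin.somedomain.com for XX.XX.XX.45:80, Apache will serve it for any request it cannot match by name; a duplicated NameVirtualHost line, or a hosts entry pointing at a different address than the one the vhost binds to, commonly produces exactly this symptom.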

    Read the article

  • using wget against protected site with NTLM

    - by Joey V.
    I'm trying to mirror a local intranet site and found previous questions about using wget. It works great with anonymous sites, but I have not been able to use it against a site that expects a username\password (IIS with Integrated Windows Authentication). Here is what I pass in:

      wget -c --http-user='domain\user' --http-password=pwd http://local/site -dv

    Here is the debug output (note that I replaced some values with dummies, obviously):

      Setting --verbose (verbose) to 1
      DEBUG output created by Wget 1.11.4 on Windows-MSVC.
      --2009-07-14 09:39:04--  http://local/site
      Host `local' has not issued a general basic challenge.
      Resolving local... seconds 0.00, x.x.x.x
      Caching local => x.x.x.x
      Connecting to local|x.x.x.x|:80... seconds 0.00, connected.
      Created socket 1896.
      Releasing 0x003e32b0 (new refcount 1).
      ---request begin---
      GET /site/ HTTP/1.0
      User-Agent: Wget/1.11.4
      Accept: */*
      Host: local
      Connection: Keep-Alive
      ---request end---
      HTTP request sent, awaiting response...
      ---response begin---
      HTTP/1.1 401 Access Denied
      Server: Microsoft-IIS/5.1
      Date: Tue, 14 Jul 2009 13:39:04 GMT
      WWW-Authenticate: Negotiate
      WWW-Authenticate: NTLM
      Content-Length: 4431
      Content-Type: text/html
      ---response end---
      401 Access Denied
      Closed fd 1896
      Unknown authentication scheme.
      Authorization failed.
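
    The final "Unknown authentication scheme" suggests (a guess, not confirmed by the article) that this particular Wget 1.11.4 build recognizes neither of the schemes the server offers (Negotiate, NTLM). Where swapping in a wget build with NTLM support is not an option, curl's NTLM support is a common fallback for fetching individual pages:

      # curl with NTLM against the same URL; single quotes keep the backslash literal.
      curl --ntlm -u 'domain\user:pwd' -o site.html http://local/site

    Note that curl does not recurse, so for mirroring a whole site the wget build (or the site's authentication settings) is the thing to change.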

    Read the article

  • Download HTML and Images with WGet without first few lines

    - by St. John Johnson
    I'm attempting to use wget with the -p option to download specific documents and the images linked in the HTML. The problem is that the site hosting the HTML has some non-HTML information preceding the HTML. This causes wget not to interpret the document as HTML, so it doesn't search for images. Is there a way to have wget strip the first X lines and/or force it to search for images? Example URL: http://www.sec.gov/Archives/edgar/data/13239/000119312510070346/ds4.htm First lines of content:

      <DOCUMENT>
      <TYPE>S-4
      <SEQUENCE>1
      <FILENAME>ds4.htm
      <DESCRIPTION>FORM S-4
      <TEXT>
      <HTML><HEAD>
      <TITLE>Form S-4</TITLE>

    Last lines of content:

      </BODY></HTML>
      </TEXT>
      </DOCUMENT>
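
    One workaround sketch (the local filenames are made up): fetch the raw document, cut everything before the <HTML> tag, then let wget pull the linked files from the cleaned local copy. --force-html tells wget to parse a local input file as HTML, and --base resolves its relative links:

      wget -O ds4.raw "http://www.sec.gov/Archives/edgar/data/13239/000119312510070346/ds4.htm"
      # Keep the document from the <HTML> tag onwards.
      sed -n '/<HTML>/,$p' ds4.raw > ds4.htm
      # Download everything the cleaned page links to, images included.
      wget --force-html --base="http://www.sec.gov/Archives/edgar/data/13239/000119312510070346/" -i ds4.htm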

    Read the article

  • wget return downloaded filename

    - by Matthew
    I'm using wget in a PHP script and need to get the name of the file downloaded. For example, if I try

      <?php
      system('/usr/bin/wget -q --directory-prefix="./downloads/" http://www.google.com/');
      ?>

    I will get a file called index.html in the downloads directory. The page will not always be Google, though, so I need to find out the name of the file that was downloaded. I'd like to have something like this:

      <?php
      // Does not work:
      $filename = system('/usr/bin/wget -q --directory-prefix="./downloads/" http://www.google.com/');
      // $filename should contain "index.html"
      ?>
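
    A sketch of one way around it (shown shell-side; PHP's system() would wrap these same commands): download into a fresh, empty directory, and whatever single file appears there is the name:

      # Assumes ./downloads exists; mktemp creates a unique empty subdirectory.
      dir=$(mktemp -d ./downloads/dl.XXXXXX)
      wget -q --directory-prefix="$dir" http://www.google.com/
      filename=$(ls "$dir")    # e.g. "index.html"

    In PHP, a scandir() on that directory after the system() call recovers the same name.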

    Read the article

  • How to enable 'wget' to download the whole content of HTML with JavaScript

    - by neversaint
    I have a site which I want to download using Unix wget. If you look at the source code and content of the file, it contains a section called SUMMARY. However, after issuing a wget command like this:

      wget -O downloadedtext.txt http://www.ncbi.nlm.nih.gov/IEB/Research/Acembly/av.cgi?db=mouse&c=gene&a=fiche&l=2610008E11Rik

    the content of downloadedtext.txt is incomplete and differs from the source code of that site. For example, it doesn't contain the SUMMARY section. Is there a way to obtain the full content correctly? I ask because I want to automate the download for different values in that HTML.
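
    One thing to rule out first (a guess based on the command as shown): left unquoted, the & characters end the shell command, so wget only ever sees the URL up to db=mouse and the server returns a different, shorter page. Quoting the URL delivers the full query string:

      wget -O downloadedtext.txt "http://www.ncbi.nlm.nih.gov/IEB/Research/Acembly/av.cgi?db=mouse&c=gene&a=fiche&l=2610008E11Rik"

    If the SUMMARY section is still missing after that, it is presumably built by JavaScript in the browser, which wget does not execute; in that case a scriptable browser rather than wget is needed.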

    Read the article

  • wget: retrieving files recursively

    - by Shadow
    When using wget with the recursive option turned on, I get an error message when it tries to download a file. It thinks the link is a downloadable file, when in reality it should just follow it to get to the page that actually contains the files I want.

      wget -r -l 16 --accept=jpg website.com

    The error message is:

      ... since it should be rejected.

    This usually occurs when the link it is trying to fetch ends with a SQL statement. The problem, however, doesn't occur when using the very same wget command on that link alone. I want to know how exactly it is trying to fetch the pages. I guess I could always poke around the source, although I don't know how messy the project is.
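
    For background (standard wget behavior, offered as a likely explanation rather than a diagnosis): with -r and --accept, wget still has to download pages that match no accept rule in order to parse them for links; it then deletes them and logs "Removing ... since it should be rejected." That message is informational rather than a fatal error, and recursion continues past it. Logging a run makes the fetch order visible:

      # -nv logs one line per fetched URL without the progress noise.
      wget -r -l 16 --accept=jpg -nv website.com 2>&1 | tee fetch.log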

    Read the article

  • download a series of files automatically using command line/wget

    - by anasnakawa
    I would like to trigger an automatic download of a list of 114 files (recitations) for each reader. For example, if I want to download the recitations for a reader called abkr, the URLs for the files will look like the following:

      http://server6.mp3quran.net/abkr/001.mp3
      http://server6.mp3quran.net/abkr/002.mp3
      ...
      http://server6.mp3quran.net/abkr/113.mp3
      http://server6.mp3quran.net/abkr/114.mp3

    These are Quran recitations, so there are always 114 in total. Is there an easy way to loop over them using the command line on Windows?
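
    A sketch of the loop, shown for a POSIX shell such as Cygwin or Git Bash on Windows (plain cmd.exe has no easy zero-padding; abkr is the reader from the example above):

      # seq -w pads the counter to a fixed width: 001, 002, ..., 114.
      for i in $(seq -w 1 114); do
          wget "http://server6.mp3quran.net/abkr/$i.mp3"
      done

    If curl is available instead, its numeric URL ranges do the same in one line: curl -O "http://server6.mp3quran.net/abkr/[001-114].mp3"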

    Read the article

  • Authentication with wget

    - by Llistes Sugra
    I am currently accepting the parameters login and password in my servlet, but the logs store this information when wget is used (since it is a GET request and Apache is in the middle). Instead, I want to enhance my servlet's authentication to accept:

      wget --http-user=login --http-password=password http://myhost/myServlet

    How can I read, in my servlet on the server side, the login and password the user is sending, in Java code?
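
    With --http-user/--http-password, wget sends HTTP Basic authentication: an Authorization header whose value is the Base64 encoding of "login:password". A sketch of what ends up on the wire (names as in the question):

      # The header wget sends:
      #   Authorization: Basic <base64 of "login:password">
      printf 'login:password' | base64

    On the servlet side, request.getHeader("Authorization") returns that value; stripping the "Basic " prefix, Base64-decoding, and splitting on the first ':' recovers the login and password. Note that wget normally waits for a 401 response carrying WWW-Authenticate: Basic before sending credentials, so the servlet must issue that challenge (or newer wget versions can be told --auth-no-challenge).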

    Read the article

  • Difference between Python urllib.urlretrieve() and wget

    - by jrdioko
    I am trying to retrieve a 500 MB file using Python, and I have a script which uses urllib.urlretrieve(). There seems to be some network problem between me and the download site, as this call consistently hangs and fails to complete. However, using wget to retrieve the file tends to work without problems. What is the difference between urlretrieve() and wget that could cause this?
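
    Likely relevant (a guess, not a diagnosis): wget retries failed transfers and can resume partial ones, which papers over a flaky connection that a single blocking urlretrieve() call cannot survive; urlretrieve() has no retry logic and, in older Pythons, no default socket timeout, so a stalled connection hangs it indefinitely. The wget behavior can even be tightened:

      # -c resumes a partial file; --tries and --timeout bound each attempt.
      wget -c --tries=20 --timeout=30 "$url"

    On the Python side, socket.setdefaulttimeout() plus a retry loop around urlretrieve() approximates the same behavior.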

    Read the article

  • Why is only one wget command working in my crontab?

    - by afEkenholm
    I wish to fetch content from a PHP script on my server two times a day, altering a query variable lang to set which language we want, and save this content in two language-specific files. This is my crontab:

      */15 * * * * ~root/apache.sh > /var/log/checkapache.log
      10 0 * * * wget -O /path/to/file-sv.sql "http://mydomain.com/path/?lang=sv"
      11 0 * * * wget -O /path/to/file-en.sql "http://mydomain.com/path/?lang=en"

    The problem is that only the first wget command line is being executed (or, to be precise, the only file being written is /path/to/file-sv.sql). If I switch the second and third rows, /path/to/file-en.sql gets written instead. The first line always runs as expected, no matter where it is. I then tried using lynx -dump "http://mydomain.com/path/?lang=xx" > /path/to/file-xx.sql, to no avail; still only the first lynx line executed successfully. Even mixing wget and lynx did not change this! Getting kinda desperate! Am I missing something? There are thousands of articles on crontab combined with wget or lynx, but all seem to cover basic setups and syntax. Does anyone have a clue what I am doing wrong? Thanks, Alexander
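
    One thing worth ruling out (a classic cron gotcha, offered as a guess): if the installed crontab's final line has no trailing newline, some cron implementations silently drop that last entry, which matches "whichever line comes last never runs". A quick check:

      # Reinstall the crontab with a guaranteed trailing newline.
      crontab -l > /tmp/cron.txt
      printf '\n' >> /tmp/cron.txt
      crontab /tmp/cron.txt

    Appending >> /var/log/wget-en.log 2>&1 to each wget line would also capture per-job output and show whether cron fires the job at all.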

    Read the article

  • Using /dev/tcp instead of wget

    - by User1
    Why does this work:

      exec 3</dev/tcp/www.google.com/80
      echo -e "GET / HTTP/1.1\n\n" >&3
      cat <&3

    And this fail:

      echo -e "GET / HTTP/1.1\n\n" > /dev/tcp/www.google.com/80
      cat </dev/tcp/www.google.com/80

    Is there a way to do it in one line without using wget, curl, or some other library?
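
    The two-line version fails because each /dev/tcp redirection opens a brand-new connection: the request goes out on one socket, and cat reads from a different, fresh one that was never sent anything. A one-line sketch that shares a single connection (HTTP/1.0 is used so the server closes the connection and cat terminates):

      { printf 'GET / HTTP/1.0\r\nHost: www.google.com\r\n\r\n' >&3; cat <&3; } 3<>/dev/tcp/www.google.com/80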

    Read the article

  • Passing $_GET or $_POST data to PHP script that is run with wget

    - by Matt
    Hello, I have the following line of PHP code, which works great:

      exec( 'wget http://www.mydomain.com/u1.php > /dev/null &' );

    u1.php does various types of maintenance on my server, and the above command makes it happen in the background. No problems there. But I need to pass variable data to u1.php before it's executed. I'd like to pass POST data preferably, but could accommodate GET or SESSION data if POST isn't an option. Basically, the data being passed is user-specific and will vary depending on who is logged in to the site and triggering the above code. I've tried adding the GET data to the end of the URL and that didn't work. So how else might I be able to send the data to u1.php? POST data is preferred; SESSION data would work as well (but I tried this and it didn't pick up the logged-in user's session data). GET would be a last resort. Thanks!
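
    Two observations worth checking (guesses from the command as shown; uid=123 is a made-up example parameter): a query string must be quoted, or the shell treats & as "run in background" and truncates the URL; and wget can send POST data itself with --post-data. SESSION data cannot carry over at all, because wget arrives as a separate client without the logged-in user's session cookie.

      # GET: quote the whole URL so '&' survives the shell.
      wget -q -O /dev/null 'http://www.mydomain.com/u1.php?uid=123&task=maint' &
      # POST: wget builds the request body itself.
      wget -q -O /dev/null --post-data='uid=123&task=maint' 'http://www.mydomain.com/u1.php' &

    Inside exec(), the same strings apply, with escapeshellarg() around any user-derived values.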

    Read the article

  • How would I write a terminal command to download a folder with wget from a Media Temple (gs) server?

    - by racl101
    I'm trying to download a folder using wget in the Terminal (I'm using a Mac, if that matters) because my FTP client sucks and keeps timing out; it doesn't stay connected for long. So I was wondering if I could use wget to connect to the server via the FTP protocol to download the directory in question. I have searched around the internet for this and have attempted to write the command, but it keeps failing. So, assuming the following:

      ftp username is: [email protected]
      ftp host is: ftp.s12345.gridserver.com
      ftp password is: somepassword

    I have tried to write the command in the following ways:

      wget -r ftp://[email protected]:[email protected]/path/to/desired/folder/
      wget -r ftp://serveradmin:[email protected]/path/to/desired/folder/

    When I try the first way, I get this error: "Bad port number." When I try the second way, I get a little further, but then this error:

      Resolving s12345.gridserver.com... 71.46.226.79
      Connecting to s12345.gridserver.com|71.46.226.79|:21... connected.
      Logging in as serveradmin ... Login incorrect.

    What could I be doing wrong?
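
    The @ inside the username is the likely culprit (a guess consistent with both errors): in the first form it splits the URL in the wrong place, hence "Bad port number", and in the second form the truncated username serveradmin is not the full login, hence "Login incorrect". Two sketches:

      # Percent-encode the @ in the username as %40:
      wget -r "ftp://serveradmin%40s12345.gridserver.com:[email protected]/path/to/desired/folder/"
      # Or keep the credentials out of the URL entirely:
      wget -r --ftp-user='[email protected]' --ftp-password='somepassword' \
          "ftp://ftp.s12345.gridserver.com/path/to/desired/folder/"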

    Read the article

  • How to use wget to grab copy of Google Code site documents?

    - by Alex Reynolds
    I have a Google Code project which has a lot of wiki'ed documentation. I would like to create a copy of this documentation for offline browsing, using wget or a similar utility. I have tried the following:

      $ wget --no-parent \
             --recursive \
             --page-requisites \
             --html-extension \
             --base="http://code.google.com/p/myProject/" \
             "http://code.google.com/p/myProject/"

    The problem is that links within the mirrored copy look like:

      file:///p/myProject/documentName

    Rewriting links in this way causes 404 (not found) errors, since the links point to nowhere valid on the filesystem. What options should I use instead with wget, so that I can make a local copy of the site's documentation and other pages?
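
    The missing piece is probably --convert-links (a guess that fits the symptom): the wiki pages use site-absolute paths like /p/myProject/documentName, which a browser resolves against file:/// when the copy is viewed locally. --convert-links rewrites each link in the saved pages to point at its local copy, or back at the live site for anything not downloaded:

      wget --no-parent --recursive --page-requisites --html-extension \
          --convert-links "http://code.google.com/p/myProject/"

    (--base only affects URLs read from an -i input file, so it does nothing in the original command.)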

    Read the article

  • [bash] checking wget's return value [if]

    - by wwrob
    I'm writing a script to download a bunch of files, and I want it to inform me when a particular file doesn't exist.

      r=`wget -q www.someurl.com`
      if [ $r -ne 0 ]
      then
          echo "Not there"
      else
          echo "OK"
      fi

    But it gives the following error on execution:

      ./file: line 2: [: -ne: unary operator expected

    What's wrong?
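
    The backticks capture wget's output (empty with -q), not its exit status, so the test expands to [ -ne 0 ] and [ complains about the missing operand. Testing the exit status directly fixes it:

      if wget -q www.someurl.com; then
          echo "OK"
      else
          echo "Not there"
      fi

    Or, to keep a variable, fill it from $? right after the call: r=$?; then [ "$r" -ne 0 ] works (the quotes guard against an empty value).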

    Read the article
