Search Results

Search found 593 results on 24 pages for 'wget'.

Page 5 of 24

  • Syncing a large personal git repo of school material (casual notes and scans)? Rsync, wget and Git -- or some ready-made tool?

    - by hhh
    My friend wants to store her school notes electronically and process them fast, with backups. She already has a repo of over 2 GB, and it keeps growing (mostly appended material, i.e. more school notes in different formats: PDF, pictures, scans, some text files, etc.). Her goal is to process the notes fast. I suggested a cron job along the lines of "# crontab -e @weekly wget --random-wait -e robots=off -U mozilla --mirror http://VeryLong.com", but I think plugging in rsync somewhere could make it much better together with Git. How would you help my friend process and store the school material under Git version control and still keep the size reasonable? Perhaps related: "rsync .git directory", "rsync git big repository", "Git/rsync mix for projects with large binaries and text files" (different scope), "What's a good way to organize a large collection of personal scripts using git?"
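    One possible starting point -- a sketch only, assuming the notes live in a directory that is already a Git repository and that the mirrored site should simply be committed alongside them (the repo path below and the VeryLong.com URL are placeholders):

        #!/bin/sh
        # weekly-notes-backup.sh -- mirror the external material and commit everything.
        # Assumed to be run from cron, e.g.:  @weekly /home/user/bin/weekly-notes-backup.sh
        REPO=/home/user/school-notes        # assumed path to the existing Git repo

        cd "$REPO" || exit 1

        # Mirror the site into a subdirectory of the repo.
        wget --mirror --random-wait -e robots=off -U mozilla \
             -P mirrored-site http://VeryLong.com

        # Commit whatever changed; Git stores only new or changed blobs, so mostly
        # appended material keeps repository growth roughly linear.
        git add -A
        git commit -m "Weekly notes backup $(date +%F)" || true

        # Optionally sync an off-machine copy as well (rsync shown as one option):
        # rsync -a "$REPO"/ backuphost:school-notes/

    Whether Git is the right store for multi-gigabyte binaries is a separate question; the sketch only shows how the pieces could be wired together.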

    Read the article

  • The right options to traverse/download the pages/directories of a subdomain

    - by Lorraine Bernard
    Suppose a site exists with the following directory layout (per subdomain):

        index.php
        |-sub1
        |   |-index.php
        |   |-sub1sub1
        |   |   |-index.php
        |   |   |-other.php
        |   |   |-sub1sub1sub1
        |-sub2
        |   |-index.php
        |   |- ...
        |-sub3
        |   |- ...

    My questions are: 1) how can I properly display the site of the sub1 subdomain locally (http://domain/sub1)? 2) how can I get just the files and directories which are children of sub1 (sub1sub1 and sub1sub1sub1, for example)? I tried the following options for wget, but it also retrieves the files and directories under sub2, sub3, etc.: wget -E -H -k -K -r http://domain/sub1/index.php
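    A minimal sketch of the usual fix for staying inside one branch of the tree: --no-parent keeps wget from climbing above sub1 (and so from reaching sub2 and sub3), and dropping -H keeps it from spanning to other hosts. Note the trailing slash -- requesting the directory rather than index.php is what lets --no-parent anchor at /sub1/:

        wget --recursive --no-parent \
             --page-requisites --convert-links --adjust-extension \
             http://domain/sub1/

    (--adjust-extension is the newer long name for -E; older wget releases spell it --html-extension.)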

    Read the article

  • Overwriting output to a text file

    - by Naveen Gamage
    I'm trying to write the wget command's output to a text file, but it always appends to the text file.

        #!/bin/sh

        download() {
            local url=$1
            echo -n "    "
            wget --progress=dot $url 2>&1 | grep --line-buffered "%" | \
                sed -u -e "s,\.,,g" | awk '{printf("\b\b\b\b%4s", $2)}'
            echo " DONE"
        }

        file="$1"
        echo -n "Downloading $file:"
        download "$file" > file.log

    I tried using >, but it won't work -- where am I going wrong?
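    A hedged sketch of one way to guarantee the log starts empty on every run: truncate it explicitly at the top of the script instead of relying on the redirection at the call site (file.log is the name from the question; the rest is unchanged in spirit):

        #!/bin/sh
        # Start with an empty log on every run of the script.
        : > file.log

        download() {
            url=$1
            wget --progress=dot "$url" 2>&1 | grep --line-buffered "%" |
                sed -u -e 's,\.,,g' | awk '{printf("\b\b\b\b%4s", $2)}'
            echo " DONE"
        }

        file="$1"
        echo -n "Downloading $file:"
        download "$file" >> file.log    # appends within this run; the log was truncated above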

    Read the article

  • Hudson authentication via wget returns HTTP error 302

    - by Rafael
    Hello, I'm trying to make a script that authenticates to Hudson using wget and stores the authentication cookie. The content of the script is this:

        wget \
          --no-check-certificate \
          --save-cookies /home/hudson/hudson-authentication-cookie \
          --output-document "-" \
          'https://myhudsonserver:8443/hudson/j_acegi_security_check?j_username=my_username&j_password=my_password&remember_me=true'

    Unfortunately, when I run this script, I get:

        --2011-02-03 13:39:29--  https://myhudsonserver:8443/hudson/j_acegi_security_check?j_username=my_username&j_password=my_password&remember_me=true
        Resolving myhudsonserver... 127.0.0.1
        Connecting to myhudsonserver|127.0.0.1|:8443... connected.
        WARNING: cannot verify myhudsonserver's certificate, issued by
        `/C=Unknown/ST=Unknown/L=Unknown/O=Unknown/OU=Unknown/CN=myhudsonserver':
        Self-signed certificate encountered.
        HTTP request sent, awaiting response... 302 Moved Temporarily
        Location: https://myhudson:8443/hudson/;jsessionid=087BD0B52C7A711E0AD7B8BD4B47585F [following]
        --2011-02-03 13:39:29--  https://myhudsonserver:8443/hudson/;jsessionid=087BD0B52C7A711E0AD7B8BD4B47585F
        Reusing existing connection to myhudsonserver:8443.
        HTTP request sent, awaiting response... 404 Not Found
        2011-02-03 13:39:29 ERROR 404: Not Found.

    There's no error log in any of Hudson's Tomcat log files. Does anyone have any idea about what might be happening? Thanks.
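    A hedged variant that sometimes helps with this kind of acegi login flow: send the credentials as a POST body instead of in the query string, and keep the session cookie while wget follows the redirect. Host name, cookie path and field names are the ones from the question; whether Hudson accepts them this way would need to be verified:

        wget \
          --no-check-certificate \
          --keep-session-cookies \
          --save-cookies /home/hudson/hudson-authentication-cookie \
          --post-data 'j_username=my_username&j_password=my_password&remember_me=true' \
          --output-document - \
          'https://myhudsonserver:8443/hudson/j_acegi_security_check'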

    Read the article

  • How to scrape a _private_ google group?

    - by John
    Hi there, I'd like to scrape the discussion list of a private Google group. It's a multi-page list and I might have to do this again later, so scripting sounds like the way to go. Since this is a private group, I need to log in to my Google account first. Unfortunately I can't manage to log in using wget or Ruby Net::HTTP. Surprisingly, Google Groups is not accessible with the ClientLogin interface, so all the code samples are useless. My Ruby script is embedded at the end of the post. The response to the authentication query is a 200 OK, but there are no cookies in the response headers and the body contains the message "Your browser's cookie functionality is turned off. Please turn it on." I got the same output with wget; see the bash script at the end of this message. I don't know how to work around this. Am I missing something? Any idea? Thanks in advance. John

    Here is the Ruby script:

        # a ruby script
        require 'net/https'

        http = Net::HTTP.new('www.google.com', 443)
        http.use_ssl = true
        path = '/accounts/ServiceLoginAuth'
        email = '[email protected]'
        password = 'topsecret'

        # form inputs from the login page
        data = "Email=#{email}&Passwd=#{password}&dsh=7379491738180116079&GALX=irvvmW0Z-zI"
        headers = {
          'Content-Type' => 'application/x-www-form-urlencoded',
          'user-agent'   => "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/533.2 (KHTML, like Gecko) Chrome/6.0"
        }

        # Post the request and print out the response to retrieve our authentication token
        resp, data = http.post(path, data, headers)
        puts resp
        resp.each {|h, v| puts h + '=' + v}
        # warning: peer certificate won't be verified in this SSL session

    Here is the bash script:

        # A bash script for wget
        CMD=""
        CMD="$CMD --keep-session-cookies --save-cookies cookies.tmp"
        CMD="$CMD --no-check-certificate"
        CMD="$CMD --post-data='[email protected]&Passwd=topsecret&dsh=-8408553335275857936&GALX=irvvmW0Z-zI'"
        CMD="$CMD --user-agent='Mozilla'"
        CMD="$CMD https://www.google.com/accounts/ServiceLoginAuth"
        echo $CMD
        wget $CMD
        wget --load-cookies="cookies.tmp" http://groups.google.com/group/mygroup/topics?tsc=2

    Read the article

  • Spider a Website and Return URLs Only

    - by Rob Wilkerson
    I'm not quite sure how best to define/articulate this, but I'm looking for a way to pseudo-spider a website. The key is that I don't actually want the content, but rather a simple list of URIs. I can get reasonably close to this idea with wget using the --spider option, but when piping that output through a grep, I can't seem to find the right magic to make it work:

        wget --spider --force-html -r -l1 http://somesite.com | grep 'Saving to:'

    The grep filter seems to have absolutely no effect on the wget output. Have I got something wrong, or is there another tool I should try that's more geared towards providing this kind of limited result set? Thanks.

    UPDATE: So I just found out offline that, by default, wget writes to stderr. I missed that in the man pages (in fact, I still haven't found it, if it's in there). Once I redirected stderr to stdout, I got closer to what I need:

        wget --spider --force-html -r -l1 http://somesite.com 2>&1 | grep 'Saving to:'

    I'd still be interested in other/better means of doing this kind of thing, if any exist.
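    A hedged refinement of the same idea: since wget logs to stderr, redirect it and pull the URLs out of the request lines (the ones starting with "--") rather than the "Saving to:" lines; somesite.com is the question's placeholder:

        wget --spider --force-html -r -l1 http://somesite.com 2>&1 \
          | grep '^--' \
          | awk '{print $3}' \
          | sort -u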

    Read the article

  • Copying only the changed files while mirroring a website

    - by Rishi Verma
    I am using wget to mirror a website with this command:

        $ wget \
             --recursive \
             --no-clobber \
             --page-requisites \
             --html-extension \
             --convert-links \
             --restrict-file-names=windows \
             --domains website.org \
             --no-parent \
             www.website.org/tutorials/html/

    The next time I run it, it starts downloading the same files again; however, I want only the changed files to be downloaded the next time. I am open to using any other tool or script (preferably PHP or cURL) apart from wget.
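    A hedged adjustment worth trying: wget refuses to combine --no-clobber with timestamping, so dropping --no-clobber and adding --timestamping (-N) makes it re-download only files whose server-side modification time or size differs from the local copy. One caveat: --convert-links rewrites the local HTML files after download, which can disturb the comparison for those pages.

        wget \
             --recursive \
             --timestamping \
             --page-requisites \
             --html-extension \
             --convert-links \
             --restrict-file-names=windows \
             --domains website.org \
             --no-parent \
             www.website.org/tutorials/html/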

    Read the article

  • How to back up a blog running on posterous.com

    - by Martin Vobr
    I'd like to back up the content of my blog, which is powered by posterous.com. I'd like to save all texts and images to the local disk. The ability to browse it offline is a plus. What I've already tried:

    wget: wget -mk http://myblogurl downloads the first page with the list of posts, then stops with a "20 redirections exceeded" message.

    WinHTTrack: it downloads the first page with a redirection to the www.posterous.com home page instead of the real page content.

    Edit: The URL of the site I'm trying to back up is blog.safabyte.net
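    A hedged first thing to try before switching tools: raise wget's redirect limit and let it carry cookies between requests, since the "20 redirections exceeded" message often points at a redirect loop that depends on a session cookie (the blog URL is the one from the question):

        wget --mirror --convert-links --page-requisites \
             --max-redirect=40 \
             --keep-session-cookies --save-cookies cookies.txt \
             http://blog.safabyte.net/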

    Read the article

  • HTTP downloads stop after some time, resuming is not possible

    - by cdauth
    When I try to download a file via HTTP, the download sometimes stops after around 30 MB. The download rate goes down to 0 B/s and no more data comes. When I stop the download and resume again, the download still hangs. But when I redownload it from byte 0 again, everything works fine up to 30 MB, when it stops again. Sometimes, after some hours, it just works again without problems. The position in the file where the download stops is variable, but most of the time it is around 30-35 MB.

    As a download manager I use wget. The same behaviour happens with curl and other download managers, though. The error occurs independently of the server I download from. I have also observed this error on other Linux computers in my network. All computers on my network run Gentoo Linux on x86.

    All internet connections on my network go through a server on my network which runs a transparent Squid proxy on port 80. That server is connected to a router, which is a Speedport W 700V by Deutsche Telekom AG. That router is connected to the internet using ADSL, with 448 kbit/s down speed and 96 kbit/s up speed.

    I am almost sure that my transparent proxy is not the problem. I turned it off without resolving the issue. I also connected to the router directly via WLAN without resolving the issue. I also tried to download over another port via HTTP. Furthermore, I tried to download the file over IPv6 with a gateway6 tunnel from my computer, which resulted in exactly the same problem.

    Now the strange thing is that everything works fine using FTP and HTTPS (also with wget on the same computer). Even stranger: when I resume the download that hung over HTTP using FTP or HTTPS, download a few bytes that way, stop wget and then resume again using HTTP, it loads data again! But after a few MB, it may stop again. Unfortunately, files downloaded that way are always broken (the MD5 sum is not correct), so at some point there must have been bogus data. I tried searching for HTML error messages in the downloaded file, but grep -i html does not find anything. (I cannot think of a way to search for GZIP-compressed HTML error messages in the file, so I did not try that.)

    I tried using strace on wget when it failed to resume a download; you can find the entire output on pastebin. The important lines are repeated every second:

        clock_gettime(CLOCK_MONOTONIC, {326102, 62176435}) = 0
        ) = 1
        write(2, "78% [++++++++++++++++++++++++++++"..., 195
        78% [+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++                    ] 110,683,685 --.-K/s
        ) = 195
        select(4, [3], NULL, NULL, {0, 949999}) = 0 (Timeout)

    I have absolutely no idea what could be the reason for this problem. It seems like whatever causes the issue speaks HTTP, and speaks it intelligently enough to even recognise it inside an IPv6-over-IPv4 tunnel. But what could that be, and why does it only happen sometimes? The other possibility is that there is a problem on my computer that is the same on the other Gentoo Linux computers as well. Has anyone ever had such a problem? What could be the reason, and where do I have to continue investigating to find out more about the issue?

    Update: I have just run into the problem again and tried to resume the download over the router's WLAN, and this time it worked. Maybe I did something wrong during my last tests with the WLAN. Now maybe my transparent proxy server is in fact the problem after all. It is a very basic Squid proxy server that does not cache anything. Maybe it is interesting that a second Squid proxy runs on the same computer on another port.

    Update: A download hung again, and this time I turned off all firewall settings and stopped all proxy servers. I failed to resume the download from my network server, which is directly connected to the router. So my proxy server definitely is not the cause of the problem. I will try to upgrade the firmware of my router now, although I do not have admin access to it. I will see what I can do.

    Read the article

  • Web spidering/crawling: can I do it, or is it only for search engines?

    - by bboyreason
    I already had a question answered about web scraping with wget, but as I read a little more, I realize I may be looking for a web-crawling program -- particularly the part about web crawlers being able to get specific data like links or, in my case, products. All of the products on my site follow the naming convention website.com/uniqueAlphaNumericID.html. As far as I know, no dynamic content generation is being used, and there is only one page per item in the above format. Should I just be thinking about something like wget website.com | grep *.html, or should I be looking into spiders/crawlers?
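    For a single site with that naming convention, plain wget can likely do the job without a separate crawler; a hedged sketch (website.com is the question's placeholder, and the recursion depth is a guess):

        # Crawl the site and keep only the .html product pages.
        wget --recursive --level=2 --no-parent --accept '*.html' http://website.com/

        # Or, if only the list of product URLs is wanted, spider it and
        # extract the URLs from wget's own request log:
        wget --spider --recursive --level=2 http://website.com/ 2>&1 \
          | grep '^--' | awk '{print $3}' | grep '\.html$' | sort -u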

    Read the article

  • Hourly cron task running more frequently than once an hour

    - by Justin
    I have a cron task that calls a special PHP script via wget. Here is the crontab entry:

        0 * * * * wget http://www....

    It will work perfectly for several days, running on the hour. However, after a few days the cron job will start to be called several times an hour. I have never seen cron drift like this, so I imagine it can't really be a cron issue. However, the logs of the script that is called clearly show it running several times an hour. Server details: Ubuntu Lucid, Apache, MySQL, PHP5. The time shows correctly at the command line, and the server is set up to sync with an NTP server. In order for the script to run, it must be passed a unique 50-character hash key in the URL, so this script isn't being called from any other source accidentally. What might cause cron to drift like this?
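    One hedged possibility worth ruling out: wget retries a failed or slow request up to 20 times by default, so a single cron firing can hit the PHP script repeatedly even though cron itself ran only once. Limiting wget to one attempt (and, optionally, wrapping the job in flock so runs can never overlap) would make the crontab entry look something like this; the URL and the lock-file path are placeholders:

        0 * * * * flock -n /tmp/hourly-task.lock wget -q -O /dev/null --tries=1 --timeout=300 "http://www.example.com/task.php?key=..."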

    Read the article

  • Serve mirrored (static) web-page with original headers

    - by aioobe
    I have a dynamic web page of which I want to create a "frozen" copy. Typically I would do something like wget -m http://example.com and then put the files in the document root of the web server. This site, however, has some dynamic content, including dynamically generated images, for instance http://example.com/company/123/logo. This means that in order to mirror the page, I need to:

    1. Save whatever headers the server currently serves for each URL. This can be done using the wget option --save-headers.
    2. Serve the static pages with the proper headers for each file. (This I have no idea how to do.)

    What is the best way to solve this? Any suggestions are welcome.

    Read the article

  • Use WinWget to keep a web site alive on Windows Server 2003

    - by Menelaos Vergis
    I have a site that must stay alive due to a service that runs on it and checks a directory for changes. The site is running in IIS on Windows Server 2003, and the solution I came up with is to schedule a task that requests the home page every 5 minutes. I am sure that this way the site will stay alive almost all the time. I have downloaded Wget for Windows and installed it on my Windows Server 2003 box, but I don't know how to use it to ping the server without downloading anything. Since I want to use this forever, I don't want to save anything to the disk. Can you provide me with the command that requests a web page but doesn't save anything to the disk?
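    A hedged sketch of the command for the scheduled task: either let wget only check the page with --spider, or fetch it and send the body to NUL (the Windows null device) so nothing lands on disk; the URL is a placeholder:

        rem Option 1: check the page without saving anything
        wget --spider -q http://www.example.com/

        rem Option 2: full request, body discarded
        wget -q -O NUL http://www.example.com/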

    Read the article

  • Stream tar.gz file from FTP server

    - by linker
    Here is the situation: I have a tar.gz file on an FTP server which can contain an arbitrary number of files. Now what I'm trying to accomplish is to have this file streamed and uploaded to HDFS through a Hadoop job. The fact that it's Hadoop is not important; in the end what I need to do is write some shell script that would take this file from FTP with wget and write the output to a stream. The reason why I really need to use streams is that there will be a large number of these files, and each file will be huge. It's fairly easy to do if I have a gzipped file and I'm doing something like this:

        wget -O - "ftp://${user}:${pass}@${host}/$file" | zcat

    But I'm not even sure if this is possible for a tar.gz file, especially since there are multiple files in the archive. I'm a bit confused about what direction to take for this; any help would be greatly appreciated.
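    For the tar.gz case specifically, the same streaming pattern generally still works, because tar can read the archive from stdin; a hedged sketch, with the credentials and host reused from the question and the HDFS destination purely illustrative:

        # Stream the archive off FTP and unpack it on the fly; -O on the tar
        # side writes the extracted file contents to stdout as one stream.
        wget -O - "ftp://${user}:${pass}@${host}/$file" \
          | tar -xzO \
          | hadoop fs -put - /path/in/hdfs/combined-output

        # Or unpack to local files instead of a single stream:
        # wget -O - "ftp://${user}:${pass}@${host}/$file" | tar -xz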

    Read the article

  • Persistent retrying and resuming of downloads with curl

    - by Svish
    I'm on a Mac and have a list of files I would like to download from an FTP server. The connection is a bit buggy, so I want it to retry and resume if the connection is dropped. I know I can do this with wget, but unfortunately Mac OS X doesn't come with wget. I could install it, but to do that (unless I have missed something) I need to install Xcode and MacPorts first, which I would like to avoid. Curl is available, though, it seems, but I don't know how that works or how to use it, really. If I have a list of files in a text file (one full path per line, like ftp://user:pass@server/dir/file1), how can I use curl to download all those files? And can I get curl to never give up -- retry infinitely and resume downloads where it left off, and such?
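    A hedged sketch using only what ships with OS X: a small loop over the list that keeps re-running curl with resume enabled until each file completes (files.txt is an assumed name for the list; -C - resumes, -O keeps the remote file name):

        #!/bin/sh
        # Download every URL listed in files.txt, resuming and retrying until done.
        while read -r url; do
            [ -z "$url" ] && continue
            until curl -O -C - --retry 10 --retry-delay 5 "$url"; do
                echo "retrying $url ..." >&2
                sleep 5
            done
        done < files.txt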

    Read the article

  • Wget works, Ping doesn't

    - by derty
    There are some anomalies on a Virtuozzo-virtualized Debian 4 box (I know, I'm going to upgrade this one ASAP, but there are dependencies). We run some websites on it. A few days ago exim4 wasn't able to send mail to SOME people. I'll use live.com as the example domain! So some of these people got mail and some didn't. Some of the mails got stuck in the queue, and after 2 days they went out!! My Nagios never showed problems with the internet connection or disk space. Now I wanted to install "dig" to look at how the server resolves the DNS request, and this Debian tells me it doesn't know dig. Long story made short: Debian is able to download sites with the exact IP or even with wget live.com, but it is not able to ping live.com. I'm 99% sure that the networking is right, and the routing too! Some examples of my trying below.

    wget live.com downloads the site, but

        ping live.com
        ping http://www.live.com
        ping http://live.com

    all return: ping: unknown host live.com

    EDIT: I now use heise.de instead of live.com. And I found out I can ping the heise.de server by using its IP address:

        myserver:~# ping 193.99.144.85
        PING 193.99.144.85 (193.99.144.85) 56(84) bytes of data.
        64 bytes from 193.99.144.85: icmp_seq=1 ttl=248 time=12.7 ms
        64 bytes from 193.99.144.85: icmp_seq=2 ttl=248 time=12.6 ms
        64 bytes from 193.99.144.85: icmp_seq=3 ttl=248 time=12.9 ms
        64 bytes from 193.99.144.85: icmp_seq=4 ttl=248 time=13.1 ms
        64 bytes from 193.99.144.85: icmp_seq=5 ttl=248 time=13.1 ms

        --- 193.99.144.85 ping statistics ---
        5 packets transmitted, 5 received, 0% packet loss, time 4001ms
        rtt min/avg/max/mdev = 12.671/12.924/13.163/0.238 ms

    EDIT 2:

        myserver:/etc/apt# dig heise.de

        ; <<>> DiG 9.3.4-P1.2 <<>> heise.de
        ;; global options:  printcmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 40551
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 5, ADDITIONAL: 3

        ;; QUESTION SECTION:
        ;heise.de.                      IN      A

        ;; ANSWER SECTION:
        heise.de.               2266    IN      A       193.99.144.80

        ;; AUTHORITY SECTION:
        heise.de.               1622    IN      NS      ns.pop-hannover.de.
        heise.de.               1622    IN      NS      ns.s.plusline.de.
        heise.de.               1622    IN      NS      ns.plusline.de.
        heise.de.               1622    IN      NS      ns2.pop-hannover.net.
        heise.de.               1622    IN      NS      ns.heise.de.

        ;; ADDITIONAL SECTION:
        ns.plusline.de.         265     IN      A       212.19.48.14
        ns.pop-hannover.de.     5113    IN      A       193.98.1.200
        ns2.pop-hannover.net.   15150   IN      A       62.48.67.66

        ;; Query time: 2 msec
        ;; SERVER: 193.200.112.80#53(193.200.112.80)
        ;; WHEN: Tue Oct  9 13:03:50 2012
        ;; MSG SIZE  rcvd: 216

    Read the article

  • wget not respecting my robots.txt. Is there an interceptor?

    - by Jane Wilkie
    I have a website where I post CSV files as a free service. Recently I have noticed that wget and libwww have been scraping pretty hard, and I was wondering how to circumvent that, even if only a little. I have implemented a robots.txt policy; I posted it below:

        User-agent: wget
        Disallow: /

        User-agent: libwww
        Disallow: /

        User-agent: *
        Disallow: /

    Issuing a wget from my totally independent Ubuntu box shows that it just doesn't seem to stop wget against my server, like so: wget http://myserver.com/file.csv. Anyway, I don't mind people just grabbing the info; I just want to implement some sort of flood control, like a wrapper or an interceptor. Does anyone have a thought about this, or could you point me in the direction of a resource? I realize that it might not even be possible. Just after some ideas. Janie

    Read the article

  • How to automate downloading files?

    - by Damon
    I got a book which included a pass to access digital versions of hi-res scans of much of the artwork in the book. Amazing! Unfortunately, the presentation of all of these is 177 pages of 8 images each, with links to zip files of JPGs. It is extremely tedious to browse, and I would love to be able to get all the files at once rather than sitting and clicking through each one separately. The pages run from archive_bookname/index.1.htm to archive_bookname/index.177.htm, and each of those pages has 8 links to files such as <snip>/downloads/_Q6Q9265.jpg.zip, <snip>/downloads/_Q6Q7069.jpg.zip and <snip>/downloads/_Q6Q5354.jpg.zip, which don't quite go in order. I cannot get a directory listing of the parent /downloads/ folder. Also, the files are behind a login wall, so a non-browser tool might be difficult without knowing how to recreate the session info. I've looked into wget a little, but I'm pretty confused and have no idea if it will help me with this. Any advice on how to tackle this? Can wget do this for me automatically?
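    A hedged sketch of how wget could be pointed at this, assuming the browser's login cookie can be exported to a Netscape-format cookies.txt (several browser extensions do this) and that the index pages and the zip links sit on the same host; example.com stands in for the real host, and archive_bookname is the path from the question:

        # Fetch each index page, follow its links one level down, and keep
        # only the zip files; the exported cookie carries the login session.
        wget --load-cookies cookies.txt \
             --recursive --level=1 \
             --accept '*.jpg.zip' \
             --no-directories --directory-prefix=artwork \
             http://example.com/archive_bookname/index.{1..177}.htm

    (The {1..177} brace expansion assumes a bash shell; a seq loop would do the same in plain sh.)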

    Read the article

  • Download Sun Studio via CLI

    - by ramesh.mimit
    Can anybody please guide me on how to download Sun Studio from the CLI? I was using the wget and lynx programs, but they did not work. As I only have SSH access to my server, downloading it to my local machine and uploading it to the server would be a bad option for me, as it would take hours to upload. The Sun Studio download requires registration + authentication. I have both, but I am not sure how to include those options when downloading via the CLI.
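    A hedged sketch of the usual workaround for registration-gated downloads: complete the login once in a desktop browser, export the cookies to a Netscape-format cookies.txt, copy that file to the server, and hand it to wget along with the final download URL copied from the browser. Both the cookie file name and the URL below are assumptions, not the real download link:

        wget --load-cookies cookies.txt \
             --content-disposition \
             "https://download.example.com/sunstudio/sunstudio12u1-linux-x86.tar.bz2"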

    Read the article

  • KeePass - KeePassHttp refuses to be recognised as a plugin

    - by wonea
    I'm trying to make KeePass work on my Windows 7 machine. I've downloaded and installed KeePass and aim to use it alongside PassIFox. However, after downloading the KeePassHttp plugin and copying it to the C:\Program Files (x86)\KeePass Password Safe folder, it refuses to show up in the KeePass plugins window. Please help; I've tried downloading KeePassHttp using multiple links from GitHub and from PassIFox itself, using Firefox and even wget. I've also tried pinging http://localhost:19455, but nothing was found. Any ideas? I'm at a loss.

    Read the article

  • Fast (non-blocking) way to transfer many files to another server

    - by Nyxynyx
    I am currently attempting to transfer over 1 million files from one server to another. Using wget, it seems to be extremely slow, probably because it starts a new transfer after the previous one has been completed. Question: is there a faster, non-blocking (asynchronous) way to do the transfer? I do not have enough space on the first server to compress the files into a tar.gz and transfer them over. Thanks!
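    Two hedged alternatives that avoid paying a connection setup per file: stream a single tar over one SSH connection (nothing extra is written on the source side), or let rsync reuse one connection and skip anything already transferred; host names and paths are placeholders:

        # Everything flows through one SSH connection; no temporary archive on disk.
        tar -cf - -C /path/to/files . | ssh user@destination 'tar -xf - -C /path/to/target'

        # Or rsync, which is restartable and skips files that already arrived:
        rsync -a --partial /path/to/files/ user@destination:/path/to/target/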

    Read the article

  • Galera install failure on Fedora 18

    - by ehime
    I've been trying to reinstall MariaDB and have been encountering multiple issues.

        $ yum install Mariadb-Galera-server
        Error: Package: MariaDB-Galera-server-5.5.29-1.i386 (mariadb)
               Requires: galera
               Available: galera-23.2.4-1.rhel5.i386 (mariadb)
                   galera = 23.2.4-1.rhel5
         You could try using --skip-broken to work around the problem
         You could try running: rpm -Va --nofiles --nodigest

    There is a requirement that libssl.so.6 and libcrypto.so.6 are installed; these DO show up in my /lib64 and /lib, though, as linked items.

        /usr/lib
        -rwxr-xr-x  1 root root 1356700 Nov 23  2010 libcrypto.so.0.9.8e
        lrwxrwxrwx  1 root root      19 Jun 28 12:03 libcrypto.so.6 -> libcrypto.so.0.9.8e
        -rwxr-xr-x. 1 root root  394272 Mar 18 14:22 libssl.so.1.0.1e
        lrwxrwxrwx  1 root root      16 Jun 28 12:03 libssl.so.6 -> libssl.so.0.9.8e

        /usr/lib64
        -rwxr-xr-x 1 root root 1849680 Mar 18 14:21 libcrypto.so.1.0.1e
        lrwxrwxrwx 1 root root      26 Jun 28 11:54 libcrypto.so.6 -> /lib64/libcrypto.so.1.0.1e
        -rwxr-xr-x 1 root root  421712 Mar 18 14:21 libssl.so.1.0.1e
        lrwxrwxrwx 1 root root      23 Jun 28 11:54 libssl.so.6 -> /lib64/libssl.so.1.0.1e

    So the deps SHOULD be met. Trying $ yum install galera returns this:

        Resolving Dependencies
        --> Running transaction check
        ---> Package galera.i386 0:23.2.4-1.rhel5 will be installed
        --> Restarting Dependency Resolution with new changes.
        --> Running transaction check
        ---> Package galera.i386 0:23.2.4-1.rhel5 will be installed
        --> Finished Dependency Resolution

    No errors? But no install either....? Let's try wget and rpm'ing the package instead, I guess:

        $ wget https://launchpad.net/galera/2.x/23.2.4/+download/galera-23.2.4-1.rhel5.x86_64.rpm
        $ rpm -ivh galera-23.2.4-1.rhel5.x86_64.rpm

    This issues the dreaded error:

        Failed dependencies:
            libcrypto.so.6()(64bit) is needed by galera-23.2.4-1.rhel5.x86_64
            libssl.so.6()(64bit) is needed by galera-23.2.4-1.rhel5.x86_64

    But we saw above that these libraries are here =( What's going on?? Is openssl not installed?

        $ yum install openssl
        Loaded plugins: langpacks, presto, refresh-packagekit
        Package 1:openssl-1.0.1e-4.fc18.x86_64 already installed and latest version
        Nothing to do

    It's there.... ??? WTH, Fedora?
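    One hedged observation that may explain the symptom: rpm resolves libcrypto.so.6()(64bit) against what installed packages declare they provide, not against symlinks that merely exist on disk. The usual route is therefore to install the OpenSSL 0.9.8 compatibility package before retrying; openssl098e is its name on recent Red Hat-family releases, but whether Fedora 18 still ships it is an assumption to verify:

        # Provides libssl.so.6 / libcrypto.so.6 as properly packaged libraries.
        yum install openssl098e

        # Then retry the Galera package and the server on top of it.
        rpm -ivh galera-23.2.4-1.rhel5.x86_64.rpm
        yum install MariaDB-Galera-server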

    Read the article

  • How would I `wget` files and then save them by date downloaded rather than filename?

    - by searchfgold6789
    My goal: to download 131 JPEGs and save them under names based on the date/time they were downloaded rather than on their original file names. I have already tried things that involve renaming the files after they have been downloaded. However, these methods do not work, because it seems no EXIF data is being kept. For example, jhead -n%Y%m%d-%H%M%S *.jpg just returns a bunch of errors saying "Possible new names for '{filename}.jpg' already exist" and "File '{filename}.jpg' contains no exif date stamp. Using file date". Usually, as in this case, I wind up with fewer files than I started out with. So is there some command I can pass to wget instead? I have already tried the --timestamp option with no success. (The man page is not too clear about what that does.)
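    A hedged sketch that sidesteps EXIF entirely: loop over the URLs and let the shell build each output name from the download time, handing it to wget with -O. The list file urls.txt and the name pattern are assumptions; the counter only keeps two downloads in the same second from colliding:

        #!/bin/sh
        # Download each URL in urls.txt, naming the file by when it was fetched.
        n=0
        while read -r url; do
            [ -z "$url" ] && continue
            n=$((n + 1))
            out="$(date +%Y%m%d-%H%M%S)-$(printf '%03d' "$n").jpg"
            wget -q -O "$out" "$url"
        done < urls.txt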

    Read the article
