Search Results

Search found 36619 results on 1465 pages for 'damn small linux'.

Page 68/1465 | < Previous Page | 64 65 66 67 68 69 70 71 72 73 74 75  | Next Page >

  • Copying files between linux machines with strong authentication but without encryption

    - by Zizzencs
    I'm looking for a suitable program to copy files from one Linux machine to another. The program should do authentication, but it should not do encryption; the reason for the latter is a lack of CPU power. I copy backups from ~70 machines to a single backup server simultaneously. The server is an HP ProLiant DL360 G7 with a 10 Gbps Ethernet connection and an FC storage backend that can do 4 Gbps. Through FTP I can write ~400 MB/sec to the storage (about what I want), but through ssh with arcfour I can only do ~100 MB/sec at 100% CPU usage. That's why I want file transfers to be unencrypted. The alternatives I found are not really suitable:
    - rcp: no authentication, forget it.
    - FTP: making the authentication "secure" (at least preventing plain-text password exchange) is possible but not easy, and I haven't found a way to force any FTP daemon to encrypt the control channel (for authentication) while leaving the data channel (for transfers) unencrypted.
    - SCP/SFTP: in fairly recent ssh(d) implementations you can't turn off encryption. The best you can do is use the arcfour cipher, but it still uses too much CPU power for my needs.
    - rsync over ssh: same problems as with SCP/SFTP.
    - plain rsync: from the rsyncd documentation: "The authentication protocol used in rsync is a 128 bit MD4 based challenge response system. This is fairly weak protection, though (with at least one brute-force hash-finding algorithm publicly available), so if you want really top-quality security, then I recommend that you run rsync over ssh." It's a no-go.
    Is there a protocol/program that can do exactly what I want? (A big plus would be if it worked on Windows as well and/or supported rsync-style copying/synchronization, e.g. copying only the differences.)
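
    For reference, daemon-mode rsync is the stock tool that comes closest to "authenticated but unencrypted": the challenge-response login quoted above is weak, but the transfers themselves are plain TCP with no cipher overhead. A minimal sketch, with module name, path and user as placeholders:

        # /etc/rsyncd.conf on the backup server
        [backups]
            path = /srv/backups
            auth users = backupuser
            secrets file = /etc/rsyncd.secrets   # format: backupuser:password, mode 600
            read only = false

        # on each client: the double colon means daemon protocol, no ssh involved
        rsync -a /data backupuser@backupserver::backups/host1/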

    Read the article

  • Linux arp cache timeout values

    - by Jak
    I'm trying to configure sane values for the Linux kernel ARP cache timeout, but I can't find a detailed explanation of how they work anywhere. Even the kernel.org documentation doesn't give a good explanation; I can only find recommended values to alleviate overflow. Here is an example of the values I have:
        net.ipv4.neigh.default.gc_thresh1 = 128
        net.ipv4.neigh.default.gc_thresh2 = 512
        net.ipv4.neigh.default.gc_thresh3 = 1024
    Now, from what I've gathered so far:
    - gc_thresh1 is the number of ARP entries allowed before the garbage collector starts removing any entries at all.
    - gc_thresh2 is the soft limit: the number of entries allowed before the garbage collector actively removes ARP entries.
    - gc_thresh3 is the hard limit, above which entries are aggressively removed.
    Now, if I understand correctly, if the number of ARP entries goes beyond gc_thresh1 but remains below gc_thresh2, the excess will be removed periodically at an interval set by gc_interval. My question is: if the number of entries goes beyond gc_thresh2 but stays below gc_thresh3, or if it goes beyond gc_thresh3, how are the entries removed? In other words, what do "actively" and "aggressively" removed mean exactly? I assume they are removed more frequently than gc_interval specifies, but I can't find by how much.
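
    For anyone tuning these, the interval and thresholds are plain sysctls, so a sketch like the following (values illustrative, sized for a network with roughly a thousand neighbours) is the usual way to experiment:

        # current garbage-collector interval, in seconds
        sysctl net.ipv4.neigh.default.gc_interval
        # raise the thresholds so the steady-state entry count sits below gc_thresh1
        sysctl -w net.ipv4.neigh.default.gc_thresh1=1024
        sysctl -w net.ipv4.neigh.default.gc_thresh2=2048
        sysctl -w net.ipv4.neigh.default.gc_thresh3=4096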

    Read the article

  • Virtual IPv6 Network between VirtualBox VMs

    - by Ben
    I'm trying to create a virtual IPv6 network as a test environment. I have 5 VirtualBox VMs (Ubuntu Server) with network adapters using host-only networking. You can imagine them connected in series, with every machine joining 2 subnets. I want to ping the last machine from the first one: from 2001:db8:aaaa::100 I want to ping 2001:db8:dddd::101. (Note: there is no cccc network in between.) Only static configuration and routes are used. On the first machine, /etc/network/interfaces:
        auto eth0
        iface eth0 inet6 static
            address 2001:db8:aaaa::100
            netmask 64
    On the next machine, /etc/network/interfaces:
        auto eth0
        iface eth0 inet6 static
            address 2001:db8:aaaa::101
            netmask 64
        auto eth1
        iface eth1 inet6 static
            address 2001:db8:bbbb::100
            netmask 64
            up ip -6 route add 2001:db8:dddd::/64 via 2001:db8:bbbb::101 dev eth1
            down ip -6 route del 2001:db8:dddd::/64 via 2001:db8:bbbb::101 dev eth1
    I thought there might be some automatic route discovery going on. Anyway, ping6 2001:db8:dddd::100 will not work from aaaa::100. When I add the route ip -6 route add 2001:db8:dddd::/64 via 2001:db8:aaaa::101, it works. But the next interface in the same network, dddd::101, is not reachable. How can that be? There is a machine with an interface bbbb::101 and another with dddd::100, and I can ping the latter, but not the machine connected to it, dddd::101? I have also turned on forwarding. Any ideas?
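
    One plausible diagnosis for comparison: with purely static routing, every router along the path needs both a forward route toward dddd::/64 and a return route toward aaaa::/64, and forwarding must be enabled on each intermediate VM; a missing return route on the far end produces exactly this "one hop further fails" symptom. A sketch of the usual checklist, with addresses following the scheme above:

        # on every intermediate VM
        sysctl -w net.ipv6.conf.all.forwarding=1
        # forward route on the first machine
        ip -6 route add 2001:db8:dddd::/64 via 2001:db8:aaaa::101
        # matching return route on the dddd-side router, pointing back
        ip -6 route add 2001:db8:aaaa::/64 via 2001:db8:bbbb::100
        # then verify hop by hop
        traceroute6 2001:db8:dddd::101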

    Read the article

  • Caching/preloading files on Linux into RAM

    - by Andrioid
    I have a rather old server that has 4 GB of RAM, and it is pretty much serving the same files all day, but it is doing so from the hard drive while 3 GB of RAM are "free". Anyone who has ever run a RAM drive can attest that it's awesome in terms of speed. The memory usage of this system is usually never higher than 1 GB of 4 GB, so I want to know if there is a way to use that extra memory for something good. Is it possible to tell the filesystem to always serve certain files out of RAM? Are there any other methods I can use to improve file-reading performance by using RAM? More specifically, I am not looking for a 'hack' here. I want filesystem calls to serve the files from RAM without needing to create a RAM drive and copy the files there manually, or at least a script that does this for me. Possible applications here are:
    - Web servers with static files that get read a lot
    - Application servers with large libraries
    - Desktop computers with too much RAM
    Any ideas? Edit: I found this very informative: The Linux Page Cache and pdflush. As Zan pointed out, the memory isn't actually free; what I mean is that it's not being used by applications, and I want to control what should be cached in memory.
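
    A sketch of one tool that fits this exactly, assuming it is packaged for the distro in question: vmtouch inspects and manipulates the page cache for named files, so the kernel serves them from RAM through normal filesystem calls, no RAM drive needed. The paths are placeholders:

        # how much of the tree is already in the page cache?
        vmtouch /var/www/static
        # read everything into the cache once
        vmtouch -t /var/www/static
        # daemonize and mlock the pages so they stay resident
        vmtouch -dl /var/www/static

    Note that -l needs a sufficient RLIMIT_MEMLOCK (or root) to lock the pages.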

    Read the article

  • Linux disk usage analyser that acts like symlinks are real files

    - by Rory
    I am using git-annex, an extension to the DVCS git that is designed for handling large files. It makes heavy use of symlinks: the actual large files are moved to the .git/annex directory, and the original files are symlinks to there. I am running out of disk space, need to clean up, and want to see what's using all my space. Usually I'd use a disk usage tool like ncdu, Baobab or Filelight. However, they treat a symlink as essentially empty, and only count the file it points to as using any space. That means that with git-annex they show no space used in the main directories and lots of space used in the .git/annex directory, which is not helpful. Is there any (graphical or ncurses-based) disk usage program for Linux (apt-get installable would be easiest) that is capable (through options or otherwise) of counting a symlink as using the space that the original file uses? Many have options for different behaviour for hard links, so it would make sense for some to handle symlinks similarly. (I know counting symlinks as using space has flaws, like counting the same space twice, broken symlinks, etc. But that's OK for my purposes.)
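
    Not graphical, but for a quick check plain du can already do this: its -L/--dereference flag charges each symlink with the size of its target, double-counting shared annex objects in exactly the way asked for. A minimal sketch, with the repo path as a placeholder:

        du -hL --max-depth=1 ~/annexed-repo | sort -h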

    Read the article

  • configure a Macbook Pro to use external monitor at boot (Debian Linux)

    - by Eric
    In the spirit of reuse, I've installed Debian (version 6.0.5 "squeeze") on my wife's old MacBook Pro (circa 2009) to repurpose it for various tasks. The catch is that the display is flaky: it lasts a random amount of time, between 2 minutes and 2 hours, before freezing and graying out. This is a known issue with that generation of MBP. Fortunately it's no problem for me, as I plan to use the machine with an external monitor anyway. Which brings us to the problem: how do I configure this thing to output to the external display by default, and hopefully disable the built-in LCD? The ideal solution would be to modify a setting in the EFI (BIOS), but I'm not holding out much hope for that. Next best would be a kernel option I can pass to the NVIDIA driver. What won't work is a solution that doesn't give me a display until X starts: I need console access, especially given that the built-in LCD is dying and any day now might give out completely. So far I haven't been able to find anything online. lspci says I've got an NVIDIA GeForce 9400M. Help is much appreciated! Eric. PS: if this question is better suited to the Unix & Linux area, please advise and I will move it.
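
    A sketch of one approach that covers the console requirement, assuming a kernel modesetting driver (nouveau) rather than the proprietary blob: the video= parameter can force outputs on or off from boot. The connector names below are assumptions; the real ones are listed under /sys/class/drm/ (e.g. card0-LVDS-1).

        # /etc/default/grub
        GRUB_CMDLINE_LINUX_DEFAULT="quiet video=LVDS-1:d video=DVI-I-1:e"
        # then regenerate the grub config
        update-grub

    Here :d disables the built-in panel and :e forces the external connector on.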

    Read the article

  • SAFE MODE Restriction in effect. The script not allowed to access directory owned by uid

    - by user57221
    I am running a dedicated server with multiple websites. I have created a global directory for scripts common to all websites, rather than repeating them in every website's directory. How can I make this global directory accessible to all websites? I am getting the following error:
        Warning: require_once() [function.require-once]: SAFE MODE Restriction in effect. The script whose uid is XXXX is not allowed to access /vhosts/globallibrary/Zend/Application.php owned by uid XXXX
    I changed the ownership of the global directory for website X, so it works fine for X. Later I added another website, Y, and now I am getting the same error again; if I change the CHOWN for website Y, then website X gets the same error. I don't want to disable the safe mode restriction. Is there a workaround so that this global directory is accessible by all websites? The global directory is at the same level as all the other websites. Also, is enabling safe mode for websites good practice?
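
    One workaround to compare against: PHP's safe mode has a directive for exactly this case, safe_mode_include_dir, which skips the UID/GID check for files included from the listed path. A sketch, assuming the library location from the error above:

        ; php.ini (or the per-vhost PHP configuration)
        safe_mode = On
        safe_mode_include_dir = /vhosts/globallibrary

    Note this only relaxes the check for include/require, not for other file operations.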

    Read the article

  • Linux: Send mail to external mail box from a server with that user's hostname

    - by dtbarne
    I've got sendmail running on a Linux box. Let's say the hostname of the box is bar.com. If I run the following command, I don't receive the email (which is hosted with a third party), presumably due to the hostname pointing to the local machine:
        echo "Test Body" | mail -s "Test Subject" [email protected]
    Is there any way to get this to work so that I can receive emails at my third-party email address even though it has the same hostname? Do I have to change the hostname of this server (not preferred)? It may be worth noting that I created a user "foo" on my machine and noticed that the mailbox for that account is empty. I noticed these log entries, which may or may not be relevant:
        Jun 28 01:09:48 bar sendmail[14338]: p5S59min014338: from=apache, size=80, class=0, nrcpts=1, msgid=<[email protected]>, relay=apache@localhost
        Jun 28 01:09:48 bar sendmail[14339]: p5S59mIA014339: from=<[email protected]>, size=293, class=0, nrcpts=1, msgid=<[email protected]>, proto=ESMTP, daemon=MTA, relay=localhost.localdomain [127.0.$
        Jun 28 01:09:48 bar sendmail[14338]: p5S59min014338: [email protected], ctladdr=apache (48/48), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=30080, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (p5S59mIA$
        Jun 28 01:09:48 bar sendmail[14340]: p5S59mIA014339: to=<[email protected]>, ctladdr=<[email protected]> (48/48), delay=00:00:00, xdelay=00:00:00, mailer=local, pri=30495, dsn=2.0.0, stat=Sent
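
    The "mailer=local" in the last log line points at one common cause worth checking: if bar.com is listed in sendmail's local-host-names file, sendmail considers itself the final destination for that domain and never looks up the external MX. A sketch of the check and fix, using the usual Linux file location:

        # is the domain claimed as local?
        grep -n 'bar\.com' /etc/mail/local-host-names
        # if so, delete that line, then restart so class $=w is rebuilt
        service sendmail restart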

    Read the article

  • High data on recv-q buffer and thread lock on java.io.BufferedInputStream in linux

    - by Sagar Patel
    We have a Java application running on Linux (Ubuntu Server). We have been facing a high recv-q problem for quite some time. The application hangs and does not read data from the socket every few hours. In a thread dump, we found the stack trace below:
        "Receiver-146" daemon prio=10 tid=0x00007fb3fc010000 nid=0x7642 runnable [0x00007fb5906c5000]
        java.lang.Thread.State: RUNNABLE
        at java.net.SocketInputStream.socketRead0(Native Method)
        at java.net.SocketInputStream.read(SocketInputStream.java:150)
        at java.net.SocketInputStream.read(SocketInputStream.java:121)
        at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
        at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
        - locked <0x00000007688f1ff0> (a java.io.BufferedInputStream)
        at org.smpp.TCPIPConnection.receive(TCPIPConnection.java:413)
        at org.smpp.ReceiverBase.receivePDUFromConnection(ReceiverBase.java:197)
        at org.smpp.Receiver.receiveAsync(Receiver.java:351)
        at org.smpp.ReceiverBase.process(ReceiverBase.java:96)
        at org.smpp.util.ProcessingThread.run(ProcessingThread.java:199)
        at java.lang.Thread.run(Thread.java:722)
    We are not able to trace the exact reason behind this; kindly help. We are using a 16-core machine, and the load on the system is around 30-40 at the time of the issue. We use the command ss dst <ip> to find the recv-q. Recently we have been facing issues with the recv-q getting stuck: the receive buffer fills up at some point, the recv-q size does not decrease, and as a result we are losing a lot of hits from the other side; our application is not accepting any data.
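
    For whoever debugs something similar: the kernel-side view of a stuck receive queue can be watched with ss, e.g. a sketch like this (the destination address is a placeholder):

        # -t TCP, -n numeric, -m include socket memory (skmem) counters;
        # a Recv-Q that keeps growing while the thread sits in socketRead0
        # means the application has stopped draining the socket
        watch -n 5 'ss -tmn dst 192.0.2.10'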

    Read the article

  • Route web traffic through a separate interface

    - by tkane
    I'd like to route web traffic through the wlan0 interface and the rest through eth1. Can you please help me with the iptables commands to achieve this? Below is my configuration. Thank you :) Edit: This is about desktop configuration, not a web server setup. Basically I want to use one of my connections to browse the web and the other one for everything else.
    ifconfig:
        eth1      Link encap:Ethernet  HWaddr 00:1d:09:59:80:70
                  inet addr:192.168.2.164  Bcast:192.168.2.255  Mask:255.255.255.0
                  inet6 addr: fe80::21d:9ff:fe59:8070/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:33 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:41 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:4771 (4.7 KB)  TX bytes:7081 (7.0 KB)
                  Interrupt:17
        wlan0     Link encap:Ethernet  HWaddr 00:1c:bf:90:8a:6d
                  inet addr:192.168.1.70  Bcast:192.168.1.255  Mask:255.255.255.0
                  inet6 addr: fe80::21c:bfff:fe90:8a6d/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:77 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:102 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:14256 (14.2 KB)  TX bytes:14764 (14.7 KB)
    route:
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        192.168.2.0     *               255.255.255.0   U     1      0        0 eth1
        192.168.1.0     *               255.255.255.0   U     2      0        0 wlan0
        link-local      *               255.255.0.0     U     1000   0        0 wlan0
        default         adsl            0.0.0.0         UG    0      0        0 eth1
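
    A sketch of the standard answer (an fwmark plus policy routing); the wlan0 gateway 192.168.1.1 is an assumption, so substitute the real one:

        # mark outgoing web traffic
        iptables -t mangle -A OUTPUT -p tcp -m multiport --dports 80,443 -j MARK --set-mark 1
        # marked packets consult a dedicated routing table
        ip rule add fwmark 1 table 100
        ip route add default via 192.168.1.1 dev wlan0 table 100
        ip route flush cache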

    Read the article

  • Linux will not activate wireless after device has been re-enabled

    - by XHR
    I'm using an Eee 900A netbook by Asus. By pressing Fn + F2, I can disable or enable the wireless chip on the netbook; a blue LED indicates the status. I've been able to connect to wireless networks just fine with this netbook. However, if the wireless chip ever becomes disabled, I have to reboot to get my network connection back. This generally happens when suspending: for some reason the LED will be off and I have to hit Fn + F2 for it to light up again. However, after doing so, Linux will not reconnect to the network; it simply changes the wireless status from "wireless is disabled" to "device not ready". Even worse, I've recently had issues with the chip being enabled at boot, thus making it nearly impossible to get connected. I've searched around online but haven't found much of anything useful on this. This happens on all kinds of different distros, including Ubuntu 9.10 Netbook, EeeBuntu 4 beta, Jolicloud and Ubuntu 10.04 Netbook. Edit: I noticed this question is getting a lot of views. To give a quick update: I never did resolve this issue with the given distros; however, I'm currently running Ubuntu 10.10 netbook edition and the issue has gone away.
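
    For anyone hitting the same symptom: the "device not ready" state usually corresponds to an rfkill soft block that survives the resume. A sketch of the usual check, assuming the rfkill utility is installed:

        # show hard (Fn-key/BIOS) and soft (driver) blocks per radio
        rfkill list
        # clear a leftover soft block without rebooting
        rfkill unblock wifi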

    Read the article

  • Why have we got so many Linux distributions? [closed]

    - by nebukadnezzar
    Pointed to from an answer to another question, I came across this graphic, and I'm shocked at how many Linux distributions currently exist. However, it seems that most of these distributions are forks of already-popular distributions with minimal changes, usually limited to themes, wallpapers and buttons. It would seem easier to create a sub-distribution with the required changes, such as Xubuntu with XFCE4, Kubuntu with KDE4, Fluxbuntu with Fluxbox, etc. In my mind there are a number of problems with having so many distributions: perhaps less security/stability due to smaller groups of developers, and also a confusingly vast range of choice for newcomers to Linux. Some reasons developers might decide to fork are:
    - Specializing in a particular topic (a work-related topic, e.g. for a hospital)
    - An exceptional architecture that requires a special set of software
    - Use of non-FOSS, proprietary technology, and such
    So what other reasons have caused so many people to create their own distributions? What are the thought processes that led to this? And are these "valid" reasons - do we need so many distributions? If you can back your experiences up with references, that would be great.

    Read the article

  • How to connect via SSH to a linux mint system that is connected via OpenVPN

    - by Hilyin
    Is there a way to keep SSH from being sent through the VPN, so that when my computer is connected to a VPN it can still be reached via SSH on its non-VPN IP? I am using Linux Mint 13. Thank you for your help! These are the instructions I followed to set up the VPN:
    1. Open Terminal.
    2. Type: sudo apt-get install network-manager-openvpn and press Y to continue.
    3. Type: sudo restart network-manager
    4. Download the BTGuard certificate (CA) by typing: sudo wget -O /etc/openvpn/btguard.ca.crt http://btguard.com/btguard.ca.crt
    5. Click on the Network Manager icon, expand VPN Connections, and choose Configure VPN.
    6. A Network Connections window will appear with the VPN tab open. Click Add.
    7. A "Choose a VPN Connection Type" window will open. Select OpenVPN in the drop-down menu and click Create.
    8. In the Editing VPN connection window, enter the following:
       Connection name: BTGuard VPN
       Gateway: vpn.btguard.com (optional: manually select your server location by using ca.vpn.btguard.com for Canada or eu.vpn.btguard.com for Germany)
       Type: select Password
       User name: username
       Password: password
       CA Certificate: browse and select /etc/openvpn/btguard.ca.crt
    9. Click Advanced... near the bottom of the window. Under the General tab, check the box next to Use a TCP connection.
    10. Click OK, then click Apply. Setup complete!
    How to connect: click on the Network Manager icon in the panel bar, click on VPN Connections, and select BTGuard VPN. The Network Manager icon will begin spinning. You may be prompted to enter a password; if so, this is your system account keychain password, NOT your BTGuard password. Once connected, the Network Manager icon will have a lock next to it, indicating you are browsing securely with BTGuard.
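
    A sketch of one common fix, sufficient when the SSH clients come from a known network: add a static route for that network via the physical gateway, so replies to SSH sessions bypass the tunnel. The addresses are placeholders for the admin network and the LAN gateway:

        # replies to 203.0.113.0/24 keep using the real interface
        sudo ip route add 203.0.113.0/24 via 192.168.1.1 dev eth0

    For SSH from arbitrary addresses you would need source-based policy routing instead (an ip rule matching the non-VPN source IP plus a separate routing table).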

    Read the article

  • Storage drives are causing system crashes

    - by Chad
    I'm running CentOS 5.4 with a 750 GB (NTFS) drive and a 2 TB drive for storage. Originally I installed the 750; everything seemed fine, and then I installed the 2 TB drive with NTFS already partitioned. I noticed that when I would copy a lot of videos, the machine would crash (no mouse or response from the server) about 20 minutes in. After some troubleshooting I noticed the 750 would also crash when doing the same task, so I decided that NTFS might be the problem. I unmounted the 2 TB drive and tried to partition and format it using ext2, but parted would crash at the point "writing inode tables". Looking at the dmesg logs, I believe this is the error: "mtrr: type mismatch for e0000000,10000000 old: write-back new: write-combining". Any idea as to what could be causing this?
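
    For diagnosing the MTRR angle: the register set the kernel complains about is visible (and writable, as root) through procfs. A sketch; the register number to disable is an assumption that has to be read off your own listing first:

        # list the memory-type ranges; look for overlaps around e0000000
        cat /proc/mtrr
        # example only: drop a conflicting register, e.g. number 2
        echo "disable=2" > /proc/mtrr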

    Read the article

  • How to Set Up an SMTP Submission Server on Linux

    - by Kevin Cox
    I was trying to set up a mail server with no luck. I want it to accept mail from authenticated users only and deliver it. I want the users to be able to connect over the internet. Ideally the mail server wouldn't accept any incoming mail. Essentially I want it to accept messages on a receiving port and transfer them to the intended recipient out port 25. If anyone has some good links and guides, that would be awesome. I am quite familiar with Linux but have never played around with MTAs, and am currently running Debian 6.
    More specific problem! Sorry, that was general, and postfix is complex. I am having trouble enabling the submission port with encryption and authentication. What works:
    - Sending mail from the local machine (sendmail [email protected]).
    - Ports are open (25 and 587).
    - Connecting to 587 appears to work: I get a "need to starttls" warning, and starttls appears to work. But when I try to connect with the next command I get the error below.
        # openssl s_client -connect localhost:587 -starttls smtp
        CONNECTED(00000003)
        depth=0 /CN=localhost.localdomain
        verify error:num=18:self signed certificate
        verify return:1
        depth=0 /CN=localhost.localdomain
        verify return:1
        ---
        Certificate chain
         0 s:/CN=localhost.localdomain
           i:/CN=localhost.localdomain
        ---
        Server certificate
        -----BEGIN CERTIFICATE-----
        MIICvDCCAaQCCQCYHnCzLRUoMTANBgkqhkiG9w0BAQUFADAgMR4wHAYDVQQDExVs
        b2NhbGhvc3QubG9jYWxkb21haW4wHhcNMTIwMjE3MTMxOTA1WhcNMjIwMjE0MTMx
        OTA1WjAgMR4wHAYDVQQDExVsb2NhbGhvc3QubG9jYWxkb21haW4wggEiMA0GCSqG
        SIb3DQEBAQUAA4IBDwAwggEKAoIBAQDEFA/S6VhJihP6OGYrhEtL+SchWxPZGbgb
        VkgNJ6xK2dhR7hZXKcDtNddL3uf1YYWF76efS5oJPPjLb33NbHBb9imuD8PoynXN
        isz1oQEbzPE/07VC4srbsNIN92lldbRruDfjDrAbC/H+FBSUA2ImHvzc3xhIjdsb
        AbHasG1XBm8SkYULVedaD7I7YbnloCx0sTQgCM0Vjx29TXxPrpkcl6usjcQfZHqY
        ozg8X48Xm7F9CDip35Q+WwfZ6AcEkq9rJUOoZWrLWVcKusuYPCtUb6MdsZEH13IQ
        rA0+x8fUI3S0fW5xWWG0b4c5IxuM+eXz05DvB7mLyd+2+RwDAx2LAgMBAAEwDQYJ
        KoZIhvcNAQEFBQADggEBAAj1ib4lX28FhYdWv/RsHoGGFqf933SDipffBPM6Wlr0
        jUn7wler7ilP65WVlTxDW+8PhdBmOrLUr0DO470AAS5uUOjdsPgGO+7VE/4/BN+/
        naXVDzIcwyaiLbODIdG2s363V7gzibIuKUqOJ7oRLkwtxubt4D0CQN/7GNFY8cL2
        in6FrYGDMNY+ve1tqPkukqQnes3DCeEo0+2KMGuwaJRQK3Es9WHotyrjrecPY170
        dhDiLz4XaHU7xZwArAhMq/fay87liHvXR860tWq30oSb5DHQf4EloCQK4eJZQtFT
        B3xUDu7eFuCeXxjm4294YIPoWl5pbrP9vzLYAH+8ufE=
        -----END CERTIFICATE-----
        subject=/CN=localhost.localdomain
        issuer=/CN=localhost.localdomain
        ---
        No client certificate CA names sent
        ---
        SSL handshake has read 1605 bytes and written 354 bytes
        ---
        New, TLSv1/SSLv3, Cipher is DHE-RSA-AES256-SHA
        Server public key is 2048 bit
        Secure Renegotiation IS supported
        Compression: NONE
        Expansion: NONE
        SSL-Session:
            Protocol  : TLSv1
            Cipher    : DHE-RSA-AES256-SHA
            Session-ID: E07926641A5EF22B15EB1D0E03FFF75588AB6464702CF4DC2166FFDAC1CA73E2
            Session-ID-ctx:
            Master-Key: 454E8D5D40380DB3A73336775D6911B3DA289E4A1C9587DDC168EC09C2C3457CB30321E44CAD6AE65A66BAE9F33959A9
            Key-Arg   : None
            Start Time: 1349059796
            Timeout   : 300 (sec)
            Verify return code: 18 (self signed certificate)
        ---
        250 DSN
        read:errno=0
    If I try to connect from Evolution I get the following error: The reported error was "HELO command failed: TCP connection reset by peer".
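
    Since this is postfix on Debian 6: the submission service is normally switched on in master.cf rather than by making smtpd listen twice. A sketch of the stock stanza (it assumes TLS certificates and an SASL backend are already configured in main.cf):

        # /etc/postfix/master.cf
        submission inet n       -       -       -       -       smtpd
          -o smtpd_tls_security_level=encrypt
          -o smtpd_sasl_auth_enable=yes
          -o smtpd_client_restrictions=permit_sasl_authenticated,reject

    With smtpd_client_restrictions set this way, port 587 relays for authenticated users only; not accepting incoming mail is then just a matter of closing port 25 to the outside.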

    Read the article

  • Linux file server for an inexperienced admin

    - by Pat
    A charity I volunteer for wants a file server for their mostly-Windows machines (about five XP and 7 machines, with some Mac laptops every now and then). For the server, I have a PC with an Intel Core 2 Duo 3 GHz processor, 4 GB of DDR2 400 MHz RAM, and a 500 GB HDD. (I should point out that they do not currently have any server; they are just using Windows to share a folder on one of the PCs.) What is a Linux distro that is easy to configure for Windows file serving, yet stable and secure enough to protect sensitive data without an expert sysadmin? I'm guessing that a Debian distro would probably fit the security bill, but I don't know of any tailored to novice sysadmins. Also, are there any killer apps that make this easy to administer and set up (as a Windows file server in particular; this answer is a good example)? Would FreeNAS be sufficient? Once it's all set up, what are the minimum measures I need to take to keep the data secure? I found this somewhat helpful answer, but it's not specific to my question of just getting a secure file server up, running, and maintained.
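
    Whatever distro is picked, Windows file serving will come down to Samba, and the share definition itself is short. A sketch, with share name, path and group invented for illustration:

        # /etc/samba/smb.conf
        [charityfiles]
            path = /srv/charityfiles
            valid users = @staff
            read only = no
            # per-user passwords are set with: smbpasswd -a <user>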

    Read the article


  • TCP dies on a Linux laptop

    - by Roman Cheplyaka
    Once every several days I have the following problem. My laptop (Debian GNU/Linux testing) suddenly becomes unable to make TCP connections to the internet. The following things continue to work fine:
    - UDP (DNS) and ICMP (ping): I get instant responses
    - TCP connections to other machines on the local network (e.g. I can ssh to a neighbouring laptop)
    - everything is OK for other machines on my LAN
    But when I try TCP connections from my laptop, they time out (no response to SYN packets). Here's a typical curl output:
        % curl -v google.com
        * About to connect() to google.com port 80 (#0)
        *   Trying 173.194.39.105... * Connection timed out
        *   Trying 173.194.39.110... * Connection timed out
        *   Trying 173.194.39.97... * Connection timed out
        *   Trying 173.194.39.102... * Timeout
        *   Trying 173.194.39.98... * Timeout
        *   Trying 173.194.39.96... * Timeout
        *   Trying 173.194.39.103... * Timeout
        *   Trying 173.194.39.99... * Timeout
        *   Trying 173.194.39.101... * Timeout
        *   Trying 173.194.39.104... * Timeout
        *   Trying 173.194.39.100... * Timeout
        *   Trying 2a00:1450:400d:803::1009... * Failed to connect to 2a00:1450:400d:803::1009: Network is unreachable
        * Success
        * couldn't connect to host
        * Closing connection #0
        curl: (7) Failed to connect to 2a00:1450:400d:803::1009: Network is unreachable
    Restarting the connection and/or reloading the network card's kernel module doesn't help; the only thing that helps is a reboot. Clearly something is wrong with my system (everything else works fine), but I have no idea what exactly. I don't know how to reproduce this, but as I said, it happens every several days. My setup is a wireless router connected to the ISP via PPPoE. Any advice?
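
    A diagnostic sketch for the next occurrence, before rebooting: capture the handshake on the wireless interface (wlan0 is an assumption here) to see whether the SYNs actually leave the machine and whether anything comes back; that separates a local-stack problem from a router/PPPoE one.

        tcpdump -ni wlan0 'tcp[tcpflags] & (tcp-syn|tcp-ack) != 0 and host 173.194.39.105'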

    Read the article

  • Can't see my internal wireless card on BackTrack 5

    - by Tomer
    I have BackTrack 5 R3 installed in VMware on a bridged connection, so it gets its own IP on my network, and it works: I get an internet connection. But there is no ethernet cable connected, yet when I run iwconfig I can't see wlan0 or any other wireless card, while eth0 is somehow connected to the network. Could it be that eth0 is actually my wireless card, somehow misconfigured? It's an Intel Centrino Advanced-N 6205.

    Read the article

  • Starting programs from terminal then exiting terminal exits started programs?

    - by sherrellbc
    I was really unsure how to phrase the question title. What I mean is that when I use the terminal to start a program, closing the terminal usually also exits the programs started from it. This makes sense from a hierarchical standpoint: the terminal is the parent process that spawns child processes, and halting the parent halts the children as well. However, I've noticed this is not always the case. For example, I downloaded the Sublime Text editor and created a symlink in PATH. I can start the program by issuing a sublime command from the terminal, and subsequently closing the terminal does nothing to Sublime. Other times, however, the child process is closed along with the terminal, or it hangs and causes problems. tl;dr: Is it always the case that programs started from a parent process will be closed when the parent exits? And if so, is there a way to start a program from the terminal and then close the terminal without exiting the started process? The whole point here is to start programs from the terminal so I do not overly populate my desktop with symlinks.
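
    What actually kills the children is the SIGHUP the shell sends to its jobs when the terminal goes away; programs that detach themselves (as Sublime appears to) survive it. The standard escapes, sketched with the same sublime command from the question:

        # ignore SIGHUP from the start, send output away from the tty
        nohup sublime >/dev/null 2>&1 &
        # or remove an already-started job from the shell's job table
        sublime & disown
        # or run it in a brand-new session, with no controlling terminal at all
        setsid sublime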

    Read the article

  • Install newer version of GCC in Knoppix

    - by Z boson
    I have Knoppix 7.3.0 installed on a USB stick with a persistent file. It comes with GCC 4.7.2; I would like to install GCC 4.9.x or 4.8.x. Knoppix being based on Debian, I would normally do something like apt-get upgrade, but as far as I understand it's not recommended to do this with Knoppix: "Warning: apt-get upgrade is a BAD IDEA. It will, quite probably, render your KNOPPIX remaster unbootable, or broken in some way. A far safer method is to only upgrade packages as necessary." I can say from experience that that is the case. So how should I go about installing GCC 4.9? Another option would be to "remaster" Knoppix to use GCC 4.9 by default rather than installing it with a persistent file. I would be happy with either solution.
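
    In the "only upgrade what you need" spirit, a sketch of pulling just the compiler from a newer Debian branch; this assumes a testing/unstable line is already present in /etc/apt/sources.list and that gcc-4.8 is packaged there:

        apt-get update
        apt-get install -t testing gcc-4.8 g++-4.8
        # make 4.8 the default without touching the 4.7 packages
        update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-4.8 50
        update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-4.8 50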

    Read the article

  • Can anyone suggest some game frameworks for GNU/Linux? [closed]

    - by dysoco
    So I've been developing a little with XNA + C# on Windows - not really much, just some 2D stuff - but I've found that XNA is a really good framework. I'm a GNU/Linux user, and I'm definitely migrating my desktop to Gentoo Linux (I've been using Arch on my laptop for a while now). But, of course, I need a C# + XNA alternative. I'm not really an expert in any language, so I can pick up pretty much anything (except, maybe, functional ones). I prefer C-like languages such as Java or Ruby; I tried Python but found the whitespace syntax confusing. I would like to hear some of your suggestions. I'm not asking for "the best", but for "some suggestions", so I think this is objective enough. You're probably going to suggest C++ + SDL, but I would prefer something more "high level" like XNA, though I'm open to discussing anything. So... any ideas? Note: I think this question meets the guidelines for this site; if it doesn't, please don't only downvote it, but comment on what I can do to improve it. Thanks. PS: 2D games, not 3D.

    Read the article

  • Stronger laptop_mode in Linux

    - by Vi
    Can I have a stronger laptop mode in Linux? I want to spin down the hard drive and prevent it from spinning up even if something wants to read data that is not in cache. In general I want to have these modes:
    - Normal
    - Current laptop mode
    - Stronger laptop mode: spin up only when something needs to read uncached data (and then cache it). No spin-ups to write anything unless there is real memory pressure (exception: an explicit "sync" command in the console). The kernel is allowed to keep processes in D-sleep for 10 seconds for this.
    - Forced laptop mode: do not spin up, period. Keep offending processes in D-sleep until I turn this mode off. As if there were a bomb instead of a hard drive.
    I also want access times tracked (mount -o atime), but I don't want the hard drive spun up only to update them. Are there settings or kernel patches that can get closer to this? Maybe I should write a special I/O scheduler for "forced laptop mode"? E.g. echo suspend > /sys/block/sda/queue/scheduler to lock the drive and echo cfq > /sys/block/sda/queue/scheduler to unlock it again?
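
    The stock knobs only get partway toward the "stronger" mode: vm.laptop_mode batches writeback around read-triggered spin-ups, and the dirty timers control how long writes may sit in RAM. A sketch (values illustrative; note that data buffered this long is lost on a crash):

        # batch writeback within 5 s of a read-triggered spin-up
        sysctl -w vm.laptop_mode=5
        # let dirty pages age up to 10 minutes before forced writeback
        sysctl -w vm.dirty_writeback_centisecs=60000
        sysctl -w vm.dirty_expire_centisecs=60000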

    Read the article

  • How to align partitions in Linux + NetApp

    - by santisaez
    NetApp support has suggested we align partitions to improve performance; in short, the starting sector must be divisible by 8. How can I move the start point of a misaligned partition (in production, with ext3) under Linux? A screenshot with a misaligned (start=63s) and an aligned (start=64s) partition is available at: http://filesocial.com/lkwvvn2 (If anyone is interested in this topic, NetApp has a good document explaining performance issues with misaligned partitions; search for "tr-3747": Best Practices for File System Alignment in Virtual Environments.) I have tried using parted "resize + move" commands, but when moving the start point I get this error:
        (parted) resize
        Partition number? 1
        Start? [64s]?
        End? [419425019s]? 419425018
        (parted) move
        Partition number? 1
        Start? 65
        End? [419425019s]? 419425019
        Error: Can't move a partition onto itself. Try using resize, perhaps?
    Using the fdisk 'b' command in expert mode ('move beginning of data in a partition') works, but it doesn't move the file system.. thanks!!
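
    For checking where things stand before touching anything, the start sectors are easy to read and test; a sketch:

        # -l list partitions, -u report start/end in sectors
        fdisk -lu /dev/sda
        # divisibility test for the classic DOS offset
        echo $((63 % 8))   # -> 7, misaligned; 64 % 8 -> 0, aligned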

    Read the article
