Search Results

Search found 26263 results on 1051 pages for 'linux guest'.

Page 458 of 1051

  • Change ip route metric

    - by notphunny
    I'm constantly switching between the eth0 and wlan0 interfaces on my Arch Linux machine because I often change OpenWrt firmware images on my second router (which isn't connected to anything else). The problem is with my routes when I'm connected to my WLAN and then connect over Ethernet to that router. Both routers are on 192.168.1.1/24, and after I bring up my Ethernet profile the eth0 route becomes the default one (which is fine for a while), presumably because of its smaller metric. So how can I change the route metric so that my applications stay connected to the internet (through the WLAN)? Maybe the solution is not to set a default gateway on the Ethernet profile at all, but I would still like to know how to change the metric, or the default route if there is more than one.
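    A minimal sketch of one way to do this with iproute2, assuming the WLAN gateway is 192.168.1.1 on wlan0; interfaces, addresses and metric values are only examples:

        # show current default routes and their metrics
        ip route show default

        # give the wlan default route a low metric so it wins
        sudo ip route replace default via 192.168.1.1 dev wlan0 metric 50

        # give the ethernet default route a higher metric, or remove it entirely
        sudo ip route replace default via 192.168.1.1 dev eth0 metric 300
        # sudo ip route del default dev eth0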

    Read the article

  • Allow SFTP in iptables

    - by Kevin Orriss
    I have just purchased a VPS from Linode and am going through the setup guide. I have everything running (apache2, php, mysql etc.) but I am being denied access via SFTP when using FileZilla to upload a file. This is my second time installing the server, as I missed a section out the first time. I was able to connect to my server through SFTP in FileZilla the first time, and the thing I missed out was adding a new user and editing the iptables rules in the firewall. So it would seem that the guide I have been following has blocked SFTP but allowed SSH. Here is the iptables file:

        *filter

        #  Allow all loopback (lo0) traffic and drop all traffic to 127/8 that doesn't use lo0
        -A INPUT -i lo -j ACCEPT
        -A INPUT ! -i lo -d 127.0.0.0/8 -j REJECT

        #  Accept all established inbound connections
        -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

        #  Allow all outbound traffic - you can modify this to only allow certain traffic
        -A OUTPUT -j ACCEPT

        #  Allow HTTP and HTTPS connections from anywhere (the normal ports for websites and SSL).
        -A INPUT -p tcp --dport 80 -j ACCEPT
        -A INPUT -p tcp --dport 443 -j ACCEPT

        #  Allow SSH connections
        #
        #  The --dport number should be the same port number you set in sshd_config
        #
        -A INPUT -p tcp -m state --state NEW --dport 22 -j ACCEPT

        #  Allow ping
        -A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT

        #  Log iptables denied calls
        -A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7

        #  Reject all other inbound - default deny unless explicitly allowed policy
        -A INPUT -j REJECT
        -A FORWARD -j REJECT

        COMMIT

    All I would like is the line I need to put in there to allow SFTP over port 22. Thank you for reading this.
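    A note and a hedged sketch, assuming sshd is providing SFTP via its sftp subsystem (the OpenSSH default): SFTP is not FTP, it runs inside the SSH connection, so the existing port 22 rule should already cover it and no extra iptables line ought to be needed. The checks below are only illustrative:

        # confirm sshd's SFTP subsystem is enabled
        grep -i '^subsystem' /etc/ssh/sshd_config

        # confirm the SSH rule is loaded and actually counting packets
        sudo iptables -L INPUT -n -v --line-numbers | grep ':22'

        # if sshd listens on a non-standard port, allow that port instead, e.g.:
        # sudo iptables -I INPUT -p tcp -m state --state NEW --dport 2222 -j ACCEPT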

    Read the article

  • How to retry connections with wget?

    - by Andrei
    I have a very unstable internet connection and sometimes have to download files as large as 200 MB. The problem is that the speed frequently drops and sits at --.-K/s while the process remains alive. I thought about just sending some KILL signals to the process, but as I read in the wget manual about signals, that doesn't help. How can I force wget to reinitialize itself and pick the download up where it left off after the connection drops and comes back up again? I would like to leave wget running, and when I come back I want to see it downloading, not waiting at --.-K/s.
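    A rough example using only standard wget options; the URL is a placeholder and the timeout values are just illustrative:

        # -c resumes a partial file, -t 0 retries forever,
        # --read-timeout aborts a stalled transfer so the retry logic kicks in,
        # --waitretry backs off between attempts
        wget -c -t 0 --timeout=30 --read-timeout=20 --waitretry=10 \
             http://example.com/big-file.iso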

    Read the article

  • How can I check for a string match AND an empty file in the same if/then bash script statement?

    - by Mike B
    I'm writing a simple bash script to do the following:

    1) Check two files (foo1 and foo2).
    2) If foo1 is different from foo2 and foo1 NOT blank, send an email.
    3) If foo1 is the same as foo2... or foo1 is blank... do nothing.

    The blank condition is what's confusing me. Here's what I've got to start with:

        diff --brief <(sort ./foo1) <(sort ./foo2) >/dev/null
        comp_value=$?

        if [ $comp_value -ne 0 ]
        then
            mail -s "Alert" [email protected] <./alertfoo
        fi

    Obviously this doesn't check for blank contents. Any thoughts on how to do that?
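    One way to sketch the combined condition, using the shell's -s test (true when a file exists and is not empty); file names follow the question:

        #!/bin/bash
        # exit status 0 means the sorted files are identical
        diff --brief <(sort ./foo1) <(sort ./foo2) >/dev/null
        comp_value=$?

        # send mail only if the files differ AND foo1 is non-empty
        if [ "$comp_value" -ne 0 ] && [ -s ./foo1 ]; then
            mail -s "Alert" [email protected] < ./alertfoo
        fi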

    Read the article

  • Applications getting killed automatically

    - by nebi
    I am running an httperf client on my machine and after a few seconds it gets killed. The command is:

        httperf --hog --client=0/1 --server=39.0.0.2 --port=80 --uri=/50kb --rate=20000 \
                --send-buffer=4096 --recv-buffer=16384 --num-conns=6000000 --num-calls=1

    Although I have run this test a number of times before, I had never seen this error until the last two days. My Ubuntu version is 10.04 and the httperf version is httperf-0.9.0. dmesg shows:

        [ 2997.180620] Out of memory: kill process 7977 (apache2) score 70532 or a child
        [ 2997.180632] Killed process 7977 (apache2)
        [ 2997.184837] Out of memory: kill process 7971 (rsyslogd) score 8702 or a child
        [ 2997.184844] Killed process 7971 (rsyslogd)
        [ 2997.188823] Out of memory: kill process 7978 (apache2) score 1354 or a child
        [ 2997.188829] Killed process 7978 (apache2)
        [ 2997.192817] Out of memory: kill process 7973 (atd) score 561 or a child
        [ 2997.192822] Killed process 7973 (atd)
        [ 2997.196805] Out of memory: kill process 8102 (httperf) score 471 or a child
        [ 2997.196811] Killed process 8102 (httperf)

    Output of the free command:

                     total       used       free     shared    buffers     cached
        Mem:       3862768     163000    3699768          0       2384      13068
        -/+ buffers/cache:     147548    3715220
        Swap:      3905528          0    3905528
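    A hedged sketch of how one might watch memory during the run and confirm it really is the OOM killer; nothing here is specific to this setup, and the lower-rate command is only a sanity check:

        # in a second terminal, watch free memory once a second while httperf runs
        watch -n 1 free -m

        # afterwards, pull the OOM-killer entries out of the kernel log
        dmesg | grep -i 'out of memory'
        grep -i 'oom' /var/log/kern.log

        # a run with a lower rate and fewer connections shows whether memory
        # pressure scales with the requested load
        httperf --hog --server=39.0.0.2 --port=80 --uri=/50kb --rate=2000 \
                --num-conns=60000 --num-calls=1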

    Read the article

  • How to get a list of Dovecot IMAP users

    - by Colt McCormack
    How do you get a list of users on a Dovecot email server who connect via IMAP (as opposed to POP)? Our server is set up to authenticate via LDAP/PAM. Is there an easy way to get a list of the users who are accessing their mail via IMAP rather than POP? I am about to migrate our server to Google Apps and want to migrate the mail for my IMAP users only (a couple of hundred out of several hundred total users). POP mail will be migrated separately from the client end, obviously. I would much rather migrate only the IMAP users than the whole domain, which would mean migrating a bunch of useless leftover POP mail on the server that has already been read/sorted/deleted in the client's email program. Migrating all of that extra leftover POP mail could waste weeks of migration time. I suppose parsing some logs to see who has connected on an IMAP port (143 or 993) would give me a list, if someone could help me do that. I know I have the raw Dovecot logs, but am hoping for a cleaner solution.
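    A minimal sketch of the log-parsing approach, assuming the common Dovecot log format where IMAP logins appear as lines like "imap-login: Login: user=<name>"; the log path is an example and varies by distribution:

        # unique user names that have logged in over IMAP
        grep 'imap-login: Login:' /var/log/mail.log* \
          | grep -o 'user=<[^>]*>' \
          | sort -u

        # the same idea for POP3, to cross-check the two lists
        grep 'pop3-login: Login:' /var/log/mail.log* \
          | grep -o 'user=<[^>]*>' \
          | sort -u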

    Read the article

  • Disaster - partitions lost, data seemingly alive, how to recover?

    - by a2h
    I've used TestDisk and it's written my old partition structure of a ~20GiB partition for Vista, ~25GiB partition for 7 (but it now shows up as unallocated) and a ~400GiB partition for documents. What it's meant to be is a 30GiB partition for 7, some unallocated space, and a ~400GiB partition for documents. So currently, I have access to all my documents, but not any of the programs I've installed on C:, or AppData, because my boot partition is now supposedly a 20GB vista partition. I've tried using my Windows 7 install disc's repair function, but that did nothing beyond wasting about 10 minutes of my time. I'm currently posting from an Ubuntu live CD. Any help?

    Read the article

  • Lighttpd based server issues crop up when port forwarding

    - by michael
    I have four host computers running lighttpd webservers. They are sitting behind an HSPA modem, with each occupying an HTTP port between 81 and 84; port 80 is taken by the modem itself. The port forwarding is set up correctly; however, only a portion of any web page I request from any of the hosts comes through (they all fail after about 20% of the page). If I put the host on port 81 into the DMZ, it serves pages fine. The others do not respond to the DMZ treatment. Is it possible the web content on the hosts somehow requires ports other than their respective HTTP port? Or is it possible that even though server.port is set in the lighttpd_ssl.conf file, the individual hosts are still expecting to serve on port 80? I am not familiar with lighttpd, nor did I set them up; they are running on video encoders I purchased. I can grab any files from them required for further information on the problem.
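    A couple of illustrative checks, assuming shell access to the encoders; paths, addresses and ports are placeholders:

        # which port(s) is lighttpd configured to bind to?
        grep -R 'server.port' /etc/lighttpd/

        # which ports is lighttpd actually listening on?
        sudo netstat -tlnp | grep lighttpd

        # fetch a page directly from the host on its own port, bypassing the modem,
        # to see whether the truncation happens locally or only through the forward
        curl -v -o /dev/null http://192.168.1.10:81/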

    Read the article

  • Tail and wildcard characters

    - by Mitch
    I want to get the last 10 lines of multiple files. I know they all end with "-access_log", so I tried:

        tail -10 *-access_log

    But this gives me an error, whereas:

        tail -10 file-*

    gives me the output I'd expect. I would think this probably has more to do with bash than with tail. However, commands like:

        cat *-access_log

    work fine. Any suggestions?
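    Two things worth trying, sketched below; whether they help depends on which error the shell or tail actually prints, which the question doesn't show:

        # check what the glob actually expands to - if nothing matches,
        # some shells pass the literal pattern straight through to tail
        echo *-access_log

        # the portable spelling of "last 10 lines", looping so each file
        # is handled (and labelled) individually
        for f in *-access_log; do
            echo "==> $f <=="
            tail -n 10 "$f"
        done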

    Read the article

  • Ubuntu wired network (ethernet does not work)

    - by user36777
    It was working just fine until the other day, when I yanked the cable out. The wireless works just fine on the same router, and if I log into the Windows 7 instance on this dual-boot laptop the ethernet works just fine, so it's not a hardware, cable or router issue. The card even gets an IP, but I can't connect to the internet. Here are the details from route, iptables, ifconfig, ping etc. Any ideas? I have been struggling with this for days; no one seems to have an answer. http://pastie.org/954816
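    A sketch of the usual checks when an interface gets an address but has no internet access; the gateway address is only an example:

        # which default route is actually being used?
        route -n

        # can we reach the gateway, then a public IP, then a hostname?
        ping -c 3 192.168.1.1
        ping -c 3 8.8.8.8
        ping -c 3 google.com

        # if the public IP works but the hostname doesn't, it's DNS
        cat /etc/resolv.conf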

    Read the article

  • Allow PHP to write file without 777

    - by camerongray
    I am setting up a simple website on webspace provided by my university. I do not have database access so I am storing all the data in a flat file. The issue I am experiencing is related to file permissions. I need PHP to be able to read and write the data file but I don't really want to set the file to 777 as anybody else on the system could modify it, they already have read access to everyone's web directories. Does anyone have any ideas on how to accomplish this? Thanks in advance
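    A hedged sketch of the group-ownership approach, assuming the web server runs as www-data (the name differs between hosts) and that the university system lets you change group ownership on your own files; the ACL alternative only works if the filesystem has ACLs enabled:

        # let the web server's group, and nobody else, read and write the file
        chgrp www-data data.txt
        chmod 660 data.txt

        # alternative: grant only the www-data user access via an ACL
        setfacl -m u:www-data:rw data.txt
        getfacl data.txt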

    Read the article

  • Keeping folder of files in sync over 3 machines

    - by Wizzard
    Morning. I've got 3 machines that have user content on them which I need to keep in sync; this is a 3-way sync. Currently I run rsync, but we just don't handle deletes. I have looked at something like Gluster, but that seems a little over the top. Is there any other software out there to do a 3-way sync, or a good network file system...? These are web servers, so we don't want a slow / IO-hungry process. 3 servers... user content could be added to one and needs to be moved to the other two.
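    Not a full answer, but a sketch of how rsync itself can propagate deletes, which may be enough if each server pushes to the other two on a schedule; host names are placeholders, and --delete needs care since a sync in the wrong direction can remove freshly added content:

        # push local content to the other two servers, propagating deletions
        rsync -az --delete /var/www/content/ web2:/var/www/content/
        rsync -az --delete /var/www/content/ web3:/var/www/content/

        # a dry run first shows what would be deleted without touching anything
        rsync -azn --delete /var/www/content/ web2:/var/www/content/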

    Read the article

  • Ubuntu stops auto-mounting flash drive

    - by Brian
    It seems that after being up for a few days, my Ubuntu system refuses to auto-mount hot-plugged USB disks (i.e. flash drives). The output from dmesg shows that the kernel recognizes the device correctly. The only solution I'm aware of at the moment is to reboot (logging out may work as well, but the impact is the same since I have a bunch of stuff open and it takes a few minutes to get everything situated after startup/login). I thought gvfs-fuse-daemon was the thing responsible for managing filesystems in userspace, but killing and restarting that doesn't help. Any other ideas?
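    A sketch of a stopgap that avoids rebooting, by mounting the recognised device by hand; the device name is an example and should be taken from dmesg:

        # find the device node the kernel just assigned (e.g. sdb1)
        dmesg | tail -n 20

        # mount it manually as a workaround
        sudo mkdir -p /media/usb
        sudo mount /dev/sdb1 /media/usb

        # on releases that ship udisks, this asks the same layer the desktop uses:
        udisks --mount /dev/sdb1        # older udisks
        # udisksctl mount -b /dev/sdb1  # newer udisks2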

    Read the article

  • Java kills sound on Karmic

    - by hasen j
    Every time I run a Java application, one of two things happens: either I lose sound in all other programs (even after quitting the Java app), or, if some other application is already playing sound, the Java app itself has no sound. Usually this can be fixed by running pulseaudio --kill from the command line, but it doesn't always work. Is there a way to fix this problem? This didn't happen before the upgrade to Karmic. Other info: the Java I'm using is Sun's Java.
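    Two hedged workarounds from the PulseAudio side; this assumes the underlying issue is Java grabbing the ALSA device directly and fighting with PulseAudio, which matches the symptoms but isn't confirmed by the question. The jar path is a placeholder:

        # restart PulseAudio cleanly instead of only killing it
        pulseaudio --kill
        pulseaudio --start

        # run the Java app with PulseAudio temporarily suspended, so it gets
        # the sound card to itself and releases it again on exit
        pasuspender -- java -jar /path/to/app.jar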

    Read the article

  • Distributed filesystem across a slow link

    - by Jeff Ferland
    I have an image in my head where a link is too slow for real-time transfer of files, but fast enough to catch up every day. What I'd like to see is a master <-> master setup where, when I write a file to server A, the metadata transfers to server B immediately and the file itself transfers when the link is idle, or immediately if server B's client tries to read the file before server A has sent it. It seems that there are many filesystems which perform well over fast links, but I don't know of any that do well with a big bottleneck and a few hours of latency.
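    Not a filesystem, but a sketch of the "catch up every day" half with plain rsync over ssh, bandwidth-limited so the slow link stays usable; host, path and limit are placeholders:

        # nightly cron entry on server A: push new and changed files to server B,
        # capped at roughly 200 KB/s so interactive traffic still gets through
        # 30 2 * * *  rsync -az --bwlimit=200 --partial /data/ serverB:/data/

        # the same command run by hand, with progress, for testing
        rsync -az --bwlimit=200 --partial --progress /data/ serverB:/data/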

    Read the article

  • Ubuntu's garbage collection cron job for PHP sessions takes 25 minutes to run, why?

    - by Lamah
    Ubuntu has a cron job set up which looks for and deletes old PHP sessions:

        # Look for and purge old sessions every 30 minutes
        09,39 *     * * *     root   [ -x /usr/lib/php5/maxlifetime ] \
            && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 \
            -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir \
            fuser -s {} 2> /dev/null \; -delete

    My problem is that this process is taking a very long time to run, with lots of disk IO. In my CPU usage graph the cleanup runs show up as teal spikes. At the beginning of the period, PHP's cleanup jobs were scheduled at the default 09 and 39 minute times. At 15:00 I removed the 39-minute time from cron, so a cleanup job twice the size runs half as often (you can see the peaks get twice as wide and half as frequent). The corresponding graphs for IO time and disk operations show the same pattern. At the peak, where there were about 14,000 sessions active, the cleanup can be seen to run for a full 25 minutes, apparently using 100% of one core of the CPU and what seems to be 100% of the disk IO for the entire period. Why is it so resource intensive? An ls of the session directory /var/lib/php5 takes just a fraction of a second. So why does it take a full 25 minutes to trim old sessions? Is there anything I can do to speed this up? The filesystem for this device is currently ext4, running on Ubuntu Precise 12.04 64-bit. EDIT: I suspect that the load is due to the unusual process "fuser" (since I expect a simple rm to be a damn sight faster than the performance I'm seeing). I'm going to remove the use of fuser and see what happens.
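    A sketch of how to test the fuser suspicion directly, by timing the selection step with and without the per-file fuser check; this only prints matches, it doesn't delete anything:

        # how long does it take just to find expired sessions?
        time find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f \
            -cmin +$(/usr/lib/php5/maxlifetime) -print | wc -l

        # the same selection, but with the fuser check the cron job uses
        time find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f \
            -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir fuser -s {} 2>/dev/null \; -print | wc -l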

    Read the article

  • Formatting LVM partition with HFS+

    - by Pyetras
    I've already tried fsck.hfsplus from hfsprogs, which doesn't do anything at all, and GParted (which doesn't work with LVM). Are there any other ways to do that? If all else fails I have an OS X install DVD, but I'm not sure whether its installer would see an LVM partition (and running it just to check would be quite troublesome, as I don't have a DVD drive ATM).
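    A minimal sketch, assuming the hfsprogs package is installed (it ships mkfs.hfsplus alongside fsck.hfsplus); mkfs.hfsplus works on any block device, including an LVM logical volume, and the volume names below are placeholders:

        sudo lvs                                   # find the logical volume
        sudo mkfs.hfsplus -v "Backup" /dev/mapper/vg0-osxdata
        sudo mount -t hfsplus /dev/mapper/vg0-osxdata /mnt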

    Read the article

  • Migration of physical servers to a virtual solution: what do I have to do?

    - by bibarse
    Hello. I'm new to this forum, so please forgive my inexperience and my poor English. I have been a trainee at a company for a month, and my task is to migrate 3 physical servers to a virtualisation platform. The company develops e-learning software, so there is a lot of data such as videos, Flash files and compressed archives (zip). Here is a rough inventory of the servers: OS: Debian and 2 Red Hat; apache, php/mysql, Sendmail/Dovecot, Webmin with the Virtualmin template to create web sites dynamically, because there is no sysadmin... The future provider will be responsible for securing, updating and creating the virtual machines (outsourcing), with Red Hat as the OS. So I would like your help in choosing a virtualisation technology (I prefer KVM / Red Hat RHEV, as VMware is expensive), in estimating the hardware requirements (planning for 4 or 5 years of growth), and in putting together a good plan so that nothing is forgotten. Thank you for your responses.

    Read the article

  • List installed packages with the repo they came from?

    - by Sandra
    With rpm it is possible to list installed packages together with additional info:

        rpm -qa --queryformat "%-35{NAME} %-35{DISTRIBUTION} %{VERSION}-%{RELEASE}\n" | sort -k 1,2 -t " " -i

    which will produce something like:

        xorg-x11-drv-ur98                   (none)                              1.1.0-1.1
        xorg-x11-drv-vesa                   CentOS-5                            1.3.0-8.3.el5
        xorg-x11-drv-vga                    (none)                              4.1.0-2.1
        xorg-x11-drv-via                    (none)                              0.2.1-9

    On an Ubuntu server I would like to list all installed packages and show which repository each came from. Can that be done?
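    A rough way to approximate this on Ubuntu with dpkg-query and apt-cache policy; the awk step assumes the usual policy output, where the installed version is marked with *** and the following line names the repository, so it may need adjusting per release:

        dpkg-query -W -f='${Package}\n' | while read -r pkg; do
            repo=$(apt-cache policy "$pkg" | awk '/\*\*\*/ { getline; print $2, $3; exit }')
            printf '%-40s %s\n' "$pkg" "$repo"
        done

    This runs apt-cache once per package, so it is slow on large installs, but it needs nothing beyond the stock tools.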

    Read the article

  • Strange filesystem behavior, Ubuntu 9

    - by Fixee
    I have two windows open on the same machine (Ubuntu 9, ia32, server). I'll call these windows W1 and W2.

    W1:

        $ cd ~/test
        $ ls
        sample
        $

    In W2 I run "make" from a parent directory that recreates file test/sample:

        $ make project
        .
        .
        $ cd test
        $ ls
        sample
        $

    Now, returning to W1:

        $ ls
        $ cd ../test
        $ ls
        sample
        $

    In other words, after I build from another window and the file test/sample is replaced, ls shows the file as missing in the 2nd window until I cd ../test back into the directory, whereupon it reappears. I can give more details if required, but I'm just wondering if this is a well-known behavior.

    Read the article

  • Proper way to configure ~/.Xsession with a standalone window manager to gracefully end a session

    - by cYrus
    I'm using xdm and my ~/.Xsession looks like this:

        # <initialization stuff here>
        exec openbox

    It works, but I've noticed that when I log out, Openbox doesn't gracefully kill all the applications; in particular Google Chrome complains about that. How can I make sure all processes are asked to exit and waited for (just like other setups: GNOME, KDE, Windows ...)? The only (ugly) solution I've found involves sleep and kill in ~/.Xsession.
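    One hedged approach, sketched rather than taken from any particular guide: run the window manager in the foreground instead of exec'ing it, and when it exits ask the clients this script started to terminate, then wait for them. Clients that re-parent themselves won't be caught by this:

        #!/bin/sh
        # <initialization stuff here> - long-running apps started with &

        # run the WM in the foreground; the script resumes when openbox exits
        openbox

        # politely ask the children this script started to shut down,
        # then wait for them to actually exit before the session ends
        pkill -TERM -P $$
        wait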

    Read the article

  • Apache on Ubuntu very slow on initial calls, very fast afterwards

    - by papakost
    I own an Ubuntu 10 VPS Server with Apache 2 hosting a Magento website. The first hit to the site from any client takes about 15-20 sec, while the subsequent hits from the same client take 0-1 sec. I suppose it doesn't have to do with Magento caching, because this happens also when the first call is on a very light page and the next calls are on heavy ones. Does anyone have an idea on what is going wrong here?
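    A sketch of how to see where those 15-20 seconds go, which usually separates DNS and connection problems from server-side work; the URL is a placeholder:

        # break the first request down into phases
        curl -o /dev/null -s -w 'dns: %{time_namelookup}  connect: %{time_connect}  first-byte: %{time_starttransfer}  total: %{time_total}\n' \
             http://www.example.com/

        # repeat immediately to compare with the fast case
        curl -o /dev/null -s -w 'total: %{time_total}\n' http://www.example.com/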

    Read the article

  • How to backup Servers to an SSH-Host with low traffic and access to versions and encryption?

    - by leto
    Hello, I've not run backups of my personal stuff for more years than I can remember, until lately I woke up and realised that, contrary to my prior belief: actually, I care! :) Now I have a central data server at home to which I want to attach external media, and onto that I want to save backups of my most important stuff, like years of self-written scripts, database dumps, you name it. I've tinkered with rsync+ssh over the last two years and also tried tar over ssh, but I don't yet know the simplest and easiest-to-maintain way to do it. Here's my workload:

    - A typical LAMP server (<5GB of data) which I'd like to back up fully, so lots of small files, connected via 10Mbit
    - My personal stuff (<750GB of data) from a Mac, connected via GE
    - My passwords in an encrypted container (100MB) from OpenBSD, connected via serial PPP
    - My e-mail from the last ten years (<25GB) as a Maildir, which I need to keep in readable format
    - Some archives (tar.*) which I need to back up only once and keep in readable format

    (Deleted my ideas, as I'm here for suggestions.) What I need:

    1. Use an ssh tunnel for data transfer
    2. Be quick with lots of small files
    3. Keep revisions
    4. Be sure the data I save is not corrupted
    5. Intelligent resume functions and the ability to deal with network congestion :)
    6. Compressed and optionally encrypted storage
    7. Be able to extract data from the backup easily (filesystem-like usage would be nice)

    How, and with what software, would you back this up? Hints to tools that can help solve only part of my problem (like encryption) are also greatly appreciated. Greets
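    Not a recommendation of a specific tool, just a sketch of the classic hard-link snapshot pattern with rsync over ssh, run on the machine holding the backup disk and pulling one host at a time; it covers the ssh-tunnel, revisions and easy-extraction points, while encryption and verification would still need to be layered on top. Host and paths are placeholders:

        # one dated snapshot per run; unchanged files are hard-linked against
        # the previous snapshot, so each revision is browsable but cheap on disk
        today=$(date +%F)
        rsync -az --delete \
              --link-dest=/backups/laptop/latest \
              -e ssh user@backuphost:/home/user/ /backups/laptop/$today/
        ln -sfn /backups/laptop/$today /backups/laptop/latest

    Tools such as rsnapshot, rdiff-backup and duplicity wrap variations of this same idea, adding rotation, deltas or encryption.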

    Read the article
