Search Results

Search found 24630 results on 986 pages for 'kali linux'.

Page 341/986 | < Previous Page | 337 338 339 340 341 342 343 344 345 346 347 348  | Next Page >

  • What do I need to do to set my computer as Default Gateway?

    - by Vaibhav
    We are trying to put together a box with dual LAN cards (let's say Outer and Inner), where the Inner LAN card is supposed to act as a default gateway on the network it is connected to. This box is running Ubuntu. The basic purpose of this box is to take messages generated on the inner network, do some work with them and forward them out the Outer LAN card to a server. The inner network is completely isolated, with just a regular switch connecting the Inner LAN card to two other boxes. These other boxes either throw out multicast messages (which the Inner LAN card is listening to), or send out unicast messages meant for the server, which is not on this inner network. So, we need the Inner LAN card to act as a default gateway, where these unicast messages will then be sent, and the code on the dual-LAN-card box can then intercept and forward these messages to the server. Questions: 1. How do we set up the LAN card to be the default gateway (does it need some configuration on Ubuntu)? 2. Once we have this set up, is it a simple matter of listening on the interface to intercept the incoming messages? Any help (pointers in the right direction) is appreciated. Thanks.
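
    A minimal sketch of the usual moving parts, assuming the Inner card is eth1 and the inner subnet is 10.0.0.0/24 (interface name and addresses are assumptions); the two other boxes would then point their default route at 10.0.0.1:

        # give the Inner card the address the other boxes will use as their gateway
        ip addr add 10.0.0.1/24 dev eth1
        ip link set eth1 up

        # only needed if the kernel itself should also route packets onward
        sysctl -w net.ipv4.ip_forward=1

        # sanity check: the unicast traffic should now arrive on the Inner card
        tcpdump -n -i eth1 not multicast

    Because the box is the gateway, those unicast frames are addressed to its MAC, so a libpcap/raw-socket listener on eth1 sees them without promiscuous mode; to hand them to an ordinary userspace socket instead, the usual approach is an iptables REDIRECT/DNAT rule in the nat PREROUTING chain.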

    Read the article

  • Restrict SSH user to connection from one machine

    - by Jonathan
    During set-up of a home server (running Kubuntu 10.04), I created an admin user for performing administrative tasks that may require an unmounted home. This user has a home directory on the root partition of the box. The machine has an internet-facing SSH server, and I have restricted the set of users that can connect via SSH, but I would like to restrict it further by making admin accessible only from my laptop (or perhaps only from the local 192.168.1.0/24 range). I currently have only an AllowGroups ssh-users with myself and admin as members of the ssh-users group. What I want is something that works the way you might expect this setup to work (but it doesn't):
        $ groups jonathan
        ... ssh-users
        $ groups admin
        ... ssh-restricted-users
        $ cat /etc/ssh/sshd_config
        ...
        AllowGroups ssh-users ssh-restricted-users@192.168.1.*
        ...
    Is there a way to do this? I have also tried the following, with admin a member of ssh-users, but it did not work (admin could still log in remotely):
        AllowUsers admin@192.168.1.* *
        AllowGroups ssh-users
    I would also be fine with only allowing admin to log in with a key and disallowing password logins, but I could find no general setting for sshd; there is a setting that requires root logins to use a key, but not one for general users.
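
    For reference, a hedged sketch of two sshd_config pieces that cover this (usernames and network taken from the question; AllowUsers accepts USER@HOST patterns, and Match blocks exist since OpenSSH 4.4):

        # /etc/ssh/sshd_config (sketch)
        # jonathan may connect from anywhere, admin only from the local range;
        # note there is no bare "*" entry, which is what let admin in remotely before
        AllowUsers jonathan admin@192.168.1.*

        # optional: key-only logins for admin, placed at the end of the file
        Match User admin
            PasswordAuthentication no

    followed by a reload (sudo /etc/init.d/ssh reload on 10.04). If AllowGroups ssh-users stays in place as well, both users still need to be members of that group.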

    Read the article

  • Apache2 BufferedLogs On - anybody using it?

    - by Qiqi
    Greetings, I am wondering whether anybody is using BufferedLogs On with Apache2 and has found any issues. The feature has been marked experimental for many years now, so I guess it's actually pretty stable. I am running some servers with constrained disk I/O capacity at the moment, so I turned it on, hoping that even a small benefit could help in the long run ;-) I get several to several hundred requests per second, so to my mind there is really no need to write to the log after each request, because honestly I don't think my filesystem is the best handler for many unnecessary writes. (It's OCFS2 shared among several DomUs in Xen.)
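
    For reference, a hedged sketch of turning it on globally on a Debian/Ubuntu-style Apache 2.2 layout (the conf.d path is an assumption); BufferedLogs is a mod_log_config directive and applies to all of its logs:

        echo "BufferedLogs On" > /etc/apache2/conf.d/buffered-logs.conf
        apache2ctl configtest && apache2ctl graceful

    The usual caveat is simply that a crash can lose the last unflushed log lines.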

    Read the article

  • Ubuntu: Unsupported Sources

    - by Scott
    What kind of software is available in that source tree? Also, what is the best way to see what kind of software is available in a repository? I have always been reluctant to enable it because I didn't want to wind up with an unstable system.
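
    One hedged way to peek at a component before enabling it is to read its package index directly; this example assumes the source tree in question is something like lucid's multiverse on i386 (URL and names are assumptions):

        wget -q http://archive.ubuntu.com/ubuntu/dists/lucid/multiverse/binary-i386/Packages.bz2
        bzcat Packages.bz2 | grep '^Package:' | sort | less

    Once a component is enabled and apt-get update has run, the same lists live under /var/lib/apt/lists/, and Synaptic can filter by origin/component as well.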

    Read the article

  • How to upgrade XBMC Live from 9.04.1 to 9.11?

    - by sunpech
    I've been unable to do a fresh install of XBMC Live 9.11 to my hard drive. Every time it fails at the Install System step, but I am able to get XBMC Live 9.04.1 to install successfully. How do I upgrade XBMC Live 9.04.1 to 9.11? I understand that Ctrl+Shift+F2 brings up the command line, but what are the next commands to run?
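
    A hedged sketch only: XBMC Live is Ubuntu underneath, so from that console the standard apt route is worth a try, assuming the XBMC repository configured on the box already carries the 9.11 packages:

        sudo apt-get update
        sudo apt-get dist-upgrade
        sudo reboot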

    Read the article

  • Start multiple instances of Firefox; Xephyr rootless mode

    - by Vi
    How can I have multiple independent instances of Mozilla Firefox 3.5 on the same X server, but started from different user accounts (and consequently different profiles)? The only limited success I've had was with Xephyr :1 and DISPLAY=:1 /usr/local/bin/firefox, but Xephyr has nothing like Cygwin/X's "rootless" mode, so it's not comfortable. The idea is to have one Firefox instance for various "Serious Business" things and the other for regular browsing with dozens of add-ons, securely isolated from each other.
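
    A hedged sketch of skipping Xephyr entirely and running the second account's profile on the main display (the account name "serious", the profile name and the :0 display are assumptions; -no-remote keeps the instances independent):

        # allow the other local user onto the current X server
        xhost +SI:localuser:serious

        # launch an isolated instance with its own profile
        sudo -u serious sh -c 'DISPLAY=:0 firefox -no-remote -P business' &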

    Read the article

  • snmptt not translating traps, even with translate_log_trap_oid=1

    - by mbrownnyc
    I am having some trouble configuring snmptt to properly translate SNMP traps. The problem is the following. /etc/snmp/snmptt.conf contains:
        EVENT fgFmTrapIfChange .1.3.6.1.4.1.12356.101.6.0.1004 "Status Events" Critical
        FORMAT $*
        EXEC /usr/local/nagios/libexec/eventhandlers/submit_check_result $r "snmp_traps" 2 "$O: $+*" "$*"
        SDESC
        Trap is sent to the managing FortiManager if an interface IP is changed
        Variables: 1: fnSysSerial 2: ifName 3: fgManIfIp 4: fgManIfMask
        EDESC
    When a trap is received, /var/log/messages shows:
        Sep 6 12:07:32 SNMPMANAGERHOST snmptrapd[15385]: 2012-09-06 12:07:32 <UNKNOWN> [UDP: [192.168.100.2]:162->[192.168.100.31]]: #012.1.3.6.1.2.1.1.3.0 = Timeticks: (707253943) 81 days, 20:35:39.43 #011.1.3.6.1.6.3.1.1.4.1.0 = OID: .1.3.6.1.4.1.12356.101.6.0.1004 #011.1.3.6.1.4.1.12356.100.1.1.1.0 = STRING: FGTNNNNNNNNN #011.1.3.6.1.2.1.31.1.1.1.1.10 = STRING: internal4 #011.1.3.6.1.4.1.12356.101.6.2.1.0 = IpAddress: 192.168.65.100 #011.1.3.6.1.4.1.12356.101.6.2.2.0 = IpAddress: 255.255.255.0
        Sep 6 12:07:37 SNMPMANAGERHOST icinga: EXTERNAL COMMAND: PROCESS_SERVICE_CHECK_RESULT; 192.168.100.2; snmp_traps; 2; enterprises.12356.101.6.0.1004: enterprises.12356.100.1.1.1.0:FGTNNNNNNNNN ifName.10:internal4 enterprises.12356.101.6.2.1.0:192.168.65.100 enterprises.12356.101.6.2.2.0:255.255.255.0
    Since the icinga entry reflects the EXEC, it's obvious that no translation is being done by snmptt. I have verified that translate_log_trap_oid and net_snmp_perl_enable are enabled in snmptt.ini. When using --debug=1 to start snmptt, I see the following in the --debugfile:
        ********** Net-SNMP version 5.05 Perl module enabled **********
    The main NET-SNMP version is reported as NET-SNMP version 5.5. What else can be done to verify that snmptt is configured properly to translate traps? I have run snmptt-net-snmp-test to verify that whatever net-snmp-perl version I have installed properly supports translations. The output indicates it does:
        /root/snmptt_1.3/snmptt-net-snmp-test --best_guess=2
        SNMPTT Net-SNMP Test v1.0 (c) 2003 Alex Burger http://snmptt.sourceforge.net
        MIBS:RFC1213-MIB
        best_guess: 2
        Testing translateObj
        ********************
        Testing: .1.3.6.1.2.1.1.1, long_names=disabled, include_module=disabled
        Test passed.  Result: sysDescr
        Testing: .1.3.6.1.2.1.1.1, long_names=disabled, include_module=enabled
        Test passed.  Result: RFC1213-MIB::sysDescr
        Testing: .1.3.6.1.2.1.1.1, long_names=enabled, include_module=disabled
        Test passed.  Result: .iso.org.dod.internet.mgmt.mib-2.system.sysDescr
        Testing: .1.3.6.1.2.1.1.1, long_names=enabled, include_module=enabled
        Test passed.  Result: RFC1213-MIB::.iso.org.dod.internet.mgmt.mib-2.system.sysDescr
        Testing: sysDescr, long_names=disabled, include_module=disabled
        Test passed.  Result: .1.3.6.1.2.1.1.1
        Testing: RFC1213-MIB::sysDescr, long_names=disabled, include_module=disabled
        Test passed.  Result: .1.3.6.1.2.1.1.1
        Testing: system.sysDescr, long_names=disabled, include_module=disabled
        Test passed.  Result: .1.3.6.1.2.1.1.1
        Testing: RFC1213-MIB::system.sysDescr, long_names=disabled, include_module=disabled
        Test passed.  Result: .1.3.6.1.2.1.1.1
        Testing: .iso.org.dod.internet.mgmt.mib-2.system.sysDescr, long_names=disabled, include_module=disabled
        Test passed.  Result: .1.3.6.1.2.1.1.1
        Testing getType
        ***************
        Testing: .1.3.6.1.2.1.4.1
        Test passed.  Result: INTEGER
        Testing: ipForwarding
        Test passed.  Result: INTEGER
        Testing Description
        *******************
        Test passed.
        Result:
        -------------------------------------------------
        The indication of whether this entity is acting as an IP gateway in respect to the forwarding of datagrams received by, but not addressed to, this entity. IP gateways forward datagrams. IP hosts do not (except those source-routed via the host). Note that for some managed nodes, this object may take on only a subset of the values possible. Accordingly, it is appropriate for an agent to return a `badValue' response if a management station attempts to change this object to an inappropriate value.
        -------------------------------------------------
    I have manually gone through the MIB containing the definition that's not resolving and verified that it links back properly to the resolved definitions. FORTINET-FORTIGATE-MIB.txt contains:
        fgFmTrapIfChange NOTIFICATION-TYPE
            OBJECTS { fnSysSerial, ifName, fgManIfIp, fgManIfMask }
            STATUS current
            DESCRIPTION "Trap is sent to the managing FortiManager if an interface IP is changed"
            ::= { fgFmTrapPrefix 1004 }
        fgFmTrapPrefix OBJECT IDENTIFIER ::= { fgMgmt 0 }
        fgMgmt OBJECT IDENTIFIER ::= { fnFortiGateMib 6 }
        fnFortiGateMib ::= { fortinet 101 }
        IMPORTS FnBoolState, FnIndex, fnAdminEntry, fnSysSerial, fortinet FROM FORTINET-CORE-MIB
        fortinet MODULE-IDENTITY ::= { enterprises 12356 }
    which looks good: 1.3.6.1.4.1.12356.101.6.0.1004. I've exhausted all the documentation and even posted fruitlessly on the snmptt-users mailing list. I cannot prove it is the MIB. Why would snmptt fail to translate traps? Thanks, Matt
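
    A couple of hedged checks that take the FortiGate out of the loop (community string and varbind value are placeholders): first ask net-snmp itself to resolve the OID, then hand-craft the same trap locally and watch whether the EXEC line starts showing fgFmTrapIfChange instead of the numeric OID:

        # does the net-snmp library snmptt relies on resolve the OID at all?
        snmptranslate -m +FORTINET-FORTIGATE-MIB .1.3.6.1.4.1.12356.101.6.0.1004

        # send a fake fgFmTrapIfChange to the local snmptrapd and watch syslog
        snmptrap -v 2c -c public 127.0.0.1 '' .1.3.6.1.4.1.12356.101.6.0.1004 \
            .1.3.6.1.2.1.31.1.1.1.1.10 s "internal4"
        tail -f /var/log/messages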

    Read the article

  • Proxying from nginx to Jetty

    - by newbie
    I'm proxying requests from nginx to Jetty, but I have a problem with the requests Jetty receives: they show the request IP address as 127.0.0.1. I want the real IP and host name, and my site has multiple domains, so when a request comes in under some domain name, that name must be available in the Jetty request too.
    nginx config:
        server {
            listen 80;                            ## listen for ipv4
            listen [::]:80 default ipv6only=on;   ## listen for ipv6
            server_name localhost;
            access_log /var/log/nginx/localhost.access.log;
            location / {
                proxy_pass http://127.0.0.1:8080;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }
    Servlet request (Dump Servlet):
        getMethod: GET
        getContentLength: -1
        getContentType: null
        getRequestURI: /dump/info
        getRequestURL: http://127.0.0.1:8080/dump/info
        getContextPath:
        getServletPath: /dump
        getPathInfo: /info
        getPathTranslated: /tmp/jetty-0.0.0.0-8080-test.war-_-any-/webapp/info
        getQueryString: null
        getProtocol: HTTP/1.0
        getScheme: http
        getServerName: 127.0.0.1
        getServerPort: 8080
        getLocalName: 127.0.0.1
        getLocalAddr: 127.0.0.1
        getLocalPort: 8080
        getRemoteUser: null
        getUserPrincipal: null
        getRemoteAddr: 127.0.0.1
        getRemoteHost: 127.0.0.1
        getRemotePort: 50905
        getRequestedSessionId: 6ubs42zhm5q61k5hm84ni3ib
        isSecure(): false
        isUserInRole(admin): false
        getLocale: en_US
        getLocales: en_US
        getLocales: en
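
    A hedged sketch of the nginx side of a fix: pass the original Host header and client address upstream. Jetty only ever sees what the proxy sends, and its connector must additionally be configured to honour forwarded headers (e.g. Jetty's "forwarded" connector option) before getRemoteAddr()/getServerName() change:

        # nginx: the existing location / block, with the extra headers
        location / {
            proxy_pass         http://127.0.0.1:8080;
            proxy_set_header   Host             $host;
            proxy_set_header   X-Real-IP        $remote_addr;
            proxy_set_header   X-Forwarded-For  $proxy_add_x_forwarded_for;
        }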

    Read the article

  • Sharing HP Deskjet F380 using CUPS via HTTP - driver issues with XP client

    - by ageis23
    Hi, the problem is that XP doesn't have built-in drivers for my printer but Vista does. On Vista it works perfectly, without any issues. However, when I try using XP, it insists that I select a driver from the selection XP offers by default. The drivers I've downloaded from HP don't support networking; HP has stated they're not networkable. Is there anything I can do about this? Any help is greatly appreciated and would save me getting earache!

    Read the article

  • Dedicated server automatic backup solution

    - by Luigi
    I have a dedicated Ubuntu web server in a cloud environment, and I am looking for a nice way to do automated backups. I would like to back up some directories with web apps, and all my MySQL databases. As for destinations: make snapshots every two hours locally, and every six hours to a remote FTP server. Also delete backup archives older than seven days (locally and on the FTP server), and notify me of any problems by email. To achieve some of this functionality I currently use cron + a shell script and http://www.mysqldumper.net/, but that doesn't really answer my needs. Mysqldumper doesn't automatically know about new databases, and the shell script does not notify me of problems. It's something I have to check on from time to time, and I don't fully trust it. I googled for a while, and it seems like most people solve this with shell scripts. Is this a method you can trust? Are there any web-GUI tools I'm missing? Maybe there is a smarter strategy for doing this? I'm a little bit confused.
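
    A hedged sketch of the cron + script route covering the pieces asked for (paths, hosts and credentials are placeholders): dump every database present at run time, rotate by age, mirror to FTP, and mail on failure:

        #!/bin/bash
        # hedged sketch: /usr/local/bin/backup-snapshot.sh, run from cron
        set -euo pipefail
        trap 'echo "backup failed on $(hostname)" | mail -s "backup FAILED" admin@example.com' ERR

        STAMP=$(date +%Y%m%d-%H%M)
        DEST=/var/backups/snapshots
        mkdir -p "$DEST"

        # --all-databases picks up databases created after this script was written
        mysqldump --all-databases --single-transaction -u backup -pSECRET | gzip > "$DEST/mysql-$STAMP.sql.gz"
        tar czf "$DEST/webapps-$STAMP.tar.gz" /var/www

        # local retention: seven days
        find "$DEST" -type f -mtime +7 -delete

        # mirror to the FTP server; --delete prunes remote copies that were removed locally
        lftp -u ftpuser,SECRET -e "mirror -R --delete $DEST /backups; quit" ftp.example.com

    Cron can run it every two hours, with a second entry every six hours calling a variant that does the FTP step; cron itself mails any output if MAILTO is set.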

    Read the article

  • IP address reuse on macvlan devices

    - by Alex Bubnoff
    I'm trying to create an easy-to-use and reasonably simple testing environment for a product, and I've hit some strange behaviour with macvlans. What I'm trying to achieve: a toolset for one-line start/stop of LXC containers (via docker) bound to external IPs (I have enough of them on the host machine). So, I'm doing something like this:
        docker run -d -name=container_name container_image
        pipework eth1 container_name ip/prefix_len@gateway
    and pipework here does this:
        GUEST_IFNAME=ph$NSPID$eth1
        ip link add link eth1 dev $GUEST_IFNAME type macvlan mode bridge
        ip link set eth1 up
        ip link set $GUEST_IFNAME netns $NSPID
        ip netns exec $NSPID ip link set $GUEST_IFNAME name eth1
        ip netns exec $NSPID ip addr add $IPADDR dev eth1
        ip netns exec $NSPID ip route delete default
        ip netns exec $NSPID ip link set eth1 up
        ip netns exec $NSPID ip route replace default via $GATEWAY
        ip netns exec $NSPID arping -c 1 -A -I eth1 $IPADDR
    And it works the first time for each IP. But from the second time on, packets for the container's IP aren't getting into the container, even though the configuration seems fine. It looks like this:
    External machine:
        ? ping 212.76.131.212
        ....silence....
    Host machine:
        root@ubuntu:~# ip link show eth1
        2: eth1: mtu 1500 qdisc pfifo_fast state UP qlen 1000
            link/ether 00:15:17:c9:e1:c9 brd ff:ff:ff:ff:ff:ff
        root@ubuntu:~# ip addr show eth1
        2: eth1: mtu 1500 qdisc pfifo_fast state UP qlen 1000
            link/ether 00:15:17:c9:e1:c9 brd ff:ff:ff:ff:ff:ff
        root@ubuntu:~# tcpdump -v -i eth1 icmp
        tcpdump: WARNING: eth1: no IPv4 address assigned
        tcpdump: listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes
        00:00:46.542042 IP (tos 0x0, ttl 60, id 9623, offset 0, flags [DF], proto ICMP (1), length 84) 5.134.221.98 > 212.76.131.212: ICMP echo request, id 6718, seq 2345, length 64
        00:00:47.549969 IP (tos 0x0, ttl 60, id 9624, offset 0, flags [DF], proto ICMP (1), length 84) 5.134.221.98 > 212.76.131.212: ICMP echo request, id 6718, seq 2346, length 64
        00:00:48.558143 IP (tos 0x0, ttl 60, id 9625, offset 0, flags [DF], proto ICMP (1), length 84) 5.134.221.98 > 212.76.131.212: ICMP echo request, id 6718, seq 2347, length 64
        00:00:49.566319 IP (tos 0x0, ttl 60, id 9626, offset 0, flags [DF], proto ICMP (1), length 84) 5.134.221.98 > 212.76.131.212: ICMP echo request, id 6718, seq 2348, length 64
        00:00:50.573999 IP (tos 0x0, ttl 60, id 9627, offset 0, flags [DF], proto ICMP (1), length 84) 5.134.221.98 > 212.76.131.212: ICMP echo request, id 6718, seq 2349, length 64
        ^C
        5 packets captured
        5 packets received by filter
        0 packets dropped by kernel
        1 packet dropped by interface
    Host machine, netns of the container:
        root@ubuntu:~# ip netns exec 32053 ip link show eth1
        48: eth1@if2: mtu 1500 qdisc noqueue state UNKNOWN
            link/ether b2:12:f7:cc:a1:9d brd ff:ff:ff:ff:ff:ff
        root@ubuntu:~# ip netns exec 32053 ip addr show eth1
        48: eth1@if2: mtu 1500 qdisc noqueue state UNKNOWN
            link/ether b2:12:f7:cc:a1:9d brd ff:ff:ff:ff:ff:ff
            inet 212.76.131.212/29 scope global eth1
            inet6 fe80::b012:f7ff:fecc:a19d/64 scope link
               valid_lft forever preferred_lft forever
        root@ubuntu:~# ip netns exec 32053 tcpdump -v -i eth1 icmp
        tcpdump: listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes
        ....silence....
        ^C
        0 packets captured
        0 packets received by filter
        0 packets dropped by kernel
    So, can anyone say what this could be? Could it be caused by something other than a bug in the macvlan implementation? Are there any tools I can use to debug this configuration?
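
    A hedged debugging angle (interface names and addresses from the question; the fixed MAC and the phNNNN name below are placeholders): every recreated macvlan gets a fresh random MAC, and the upstream router may keep sending to the old one until its ARP entry expires, so comparing the MAC on the wire with the current interface MAC is a quick first check:

        # which destination MAC are the echo requests actually carrying?
        tcpdump -e -n -i eth1 icmp

        # the MAC of the current macvlan inside the container's namespace
        ip netns exec 32053 ip -o link show eth1

        # re-announce the address, or create the macvlan with a fixed MAC so it never changes
        ip netns exec 32053 arping -c 3 -A -I eth1 212.76.131.212
        ip link add link eth1 dev phNNNN address 02:00:00:0a:0b:0c type macvlan mode bridge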

    Read the article

  • Install PHP on RedHat

    - by John R
    I just ran yum install php at the command prompt. Everything went fine ('Complete!') according to the output. I then uploaded a file that does not use short tags and is named with the proper extension (i.e., the name is test.php):
        <?php print "hello world"; ?>
    When I navigate my browser to test.php it just prints each of the characters shown above; i.e., PHP is not interpreting it. What might be the problem? Also, if there is a configuration file that needs to be updated, please tell me which directory path I am likely to find that file in. Edit: Apache2 & Red Hat
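
    A hedged checklist for the mod_php route on a yum-managed box (assuming Apache is the distribution httpd package):

        yum install -y php           # brings in the mod_php module for httpd
        httpd -M | grep -i php       # confirm php5_module appears in the loaded-module list
        service httpd restart        # Apache only loads the new module after a restart

    If the file is instead served by a manually compiled Apache, its own httpd.conf needs the LoadModule/AddHandler lines for PHP; the yum package only adds those for the system httpd (/etc/httpd/conf.d/php.conf).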

    Read the article

  • Problems connecting CentOS on VMware Player to the network using a bridged connection

    - by Sakin
    Hi, I installed CentOS on VMware Player 3.0.1 running on Windows XP. When trying to configure it to connect to the internet with a bridged configuration, I get an error message when bringing up the network interface:
        [root@VMLinux ~]# /etc/init.d/network start
        Bringing up loopback interface:  [ OK ]
        Bringing up interface eth0:  Determining IP information for eth0... failed  [FAILED]
    The VM is running on a machine that has access to the network; I tried it on two different networks that have DHCP enabled. Everything works fine when using a NAT connection through my host. How can I make the bridge work for me? Thanks.

    Read the article

  • How can I register a custom protocol with xdg?

    - by julien
    I've been struggling this morning trying to associate an application with a custom protocol, namely emacsclient and org-protocol. I'm calling this protocol from a web-browser bookmarklet, and I get the following behaviour. In Chromium, the "Launch Application" dialog comes up and calls xdg-open org-protocol://..., which ends up firing up a new Chromium frame. In Firefox, I've tried setting network.protocol-handler.app.org-protocol to an empty string or to my emacsclient path; either way I get the following error message, without even being shown an external-application selection dialog: "Firefox doesn't know how to open this address, because the protocol (org-protocol) isn't associated with any program". I'm not using any desktop environment, so I need to make this work strictly with xdg, but despite reading the shared MIME info spec etc., I still can't fathom a working configuration.
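
    For what it's worth, a hedged sketch of the xdg-utils route: register a desktop entry as the x-scheme-handler for org-protocol (file name and Exec line are illustrative, and this assumes a reasonably recent xdg-utils/shared-mime-info even without a desktop environment):

        # ~/.local/share/applications/org-protocol.desktop
        [Desktop Entry]
        Name=org-protocol
        Exec=emacsclient %u
        Type=Application
        Terminal=false
        MimeType=x-scheme-handler/org-protocol;

        # then register it:
        update-desktop-database ~/.local/share/applications
        xdg-mime default org-protocol.desktop x-scheme-handler/org-protocol

    Chromium's xdg-open path should then offer emacsclient; for Firefox, the commonly documented extra step in the org-protocol setup is creating the boolean pref network.protocol-handler.expose.org-protocol = false so that it shows the application chooser.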

    Read the article

  • Command-line way to send keystrokes to a window open on a different X-session

    - by Sanjay Manohar
    I have an Ubuntu desktop open and logged on on my main machine. I am also logging in to this machine from a remote computer, using X2Go, which creates a new X session. I have a LibreOffice file open in the original session. I want to save this file and close it - but from the remote machine! (Both sessions have the same user logged in; I can sudo if needed.) I have tried using xdotool search, but it fails to find the window. Is there a way to do what I want from this second session? I can see the process with ps -A; I can even see the open file with lsof! How can I do a "save and close" on it?
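
    A hedged sketch from the remote session: point the tools at the display that owns the desktop and at that session's X authority file, then drive the existing window (the :0 display number and the window title are assumptions):

        export DISPLAY=:0
        export XAUTHORITY=$HOME/.Xauthority      # the cookie of the session that owns :0

        WIN=$(xdotool search --name 'LibreOffice' | head -n 1)
        xdotool windowactivate --sync "$WIN"
        xdotool key --window "$WIN" ctrl+s       # save
        xdotool key --window "$WIN" ctrl+w       # close the document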

    Read the article

  • Missing NFS service link?

    - by Recc
    # ps ax | grep nfs
        1108 ?        S<     0:00 [nfsd4]
        1109 ?        S<     0:00 [nfsd4_callbacks]
        1110 ?        S      0:00 [nfsd]
        1111 ?        S      0:00 [nfsd]
        1112 ?        S      0:00 [nfsd]
        1113 ?        S      0:00 [nfsd]
        1114 ?        S      0:00 [nfsd]
        1115 ?        S      0:00 [nfsd]
        1116 ?        S      0:00 [nfsd]
        1117 ?        S      0:00 [nfsd]
        4437 ?        S<     0:00 [nfsiod]
        16799 ?       S      0:00 [nfsv4.0-svc]
        18091 pts/1   S+     0:00 grep nfs
    But:
        # service nfs status
        nfs: unrecognized service
    This is on Ubuntu 11.04. Am I missing a symlink or something? How can I fix this quickly?
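
    For what it's worth, the init scripts are simply named differently on Ubuntu than on Red Hat; a hedged check:

        service nfs-kernel-server status   # the NFS server itself (package nfs-kernel-server)
        service idmapd status              # ID mapping, from nfs-common
        service statd status               # lock/status daemon, from nfs-common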

    Read the article

  • Connect to SVN through Eclipse on Ubuntu

    - by Gene R
    We have a Subversion server running on an internal server. I'm trying to connect to it through Subclipse (Eclipse) on Ubuntu. When I enter the URL svn://servername/site/trunk, as I do from Windows, I get the following error:
        Error validating location: "org.tigris.subversion.javahl.ClientException: svn: Malformed network data"
    Has anybody got any ideas?
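
    Two hedged checks from the Ubuntu box that usually narrow this down; "Malformed network data" over svn:// often just means port 3690 is blocked or whatever answers there isn't actually speaking the svnserve protocol:

        svn ls svn://servername/site/trunk    # does the plain command-line client work?
        nc -vz servername 3690                # is the svnserve port reachable from this machine?

    If the command-line client is fine, switching Subclipse's client between JavaHL and SVNKit (Window > Preferences > Team > SVN > SVN interface) is the next usual suspect.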

    Read the article

  • Using Gentoo, how does one stick a -9999 ebuild to a specific svn revision?

    - by hurikhan77
    As an example, given the django-9999 ebuild, to match the developers' environment I need to check out r12120 from trunk. Installing Django manually is not an option for package-management reasons, but there is also no ebuild in portage for the 1.2 beta versions. So I did the following:
        ESVN_OPTIONS="-r12120" emerge -1a django
    which installed the required revision from svn. But this is cumbersome. Is there some way to define this statically per ebuild, e.g. something like DJANGO_SVN_REV="12120" in make.conf? That would be much cleaner in my eyes, because next time I need to rebuild django for whatever reason I need to remember "Oh, I wanted this to stick to a specific revision", and the next question will be "err, f&!#$?%, what was it again?" What's the best way to go here? Keep in mind:
        Manually installing packages without package-manager knowledge is not an option.
        Working around with manual emerge variable prefixing is not an option.
        Setting up an /etc/portage/package.env would be a way to go (as described here, and sketched below), but that seems pretty unsupported and kludgy to me and thus unpreferable.
        Modifying make.conf would be a way to go.
        Keeping the ebuild in an overlay would be an option.
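
    For the record, the package.env mechanism mentioned in the list above is the static way to express exactly this; a hedged sketch (file names are arbitrary):

        mkdir -p /etc/portage/env
        echo 'ESVN_OPTIONS="-r12120"' > /etc/portage/env/django-r12120
        echo 'dev-python/django django-r12120' >> /etc/portage/package.env
        emerge -1a dev-python/django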

    Read the article

  • Samba PDC share slow with LDAP backend

    - by hmart
    The scenario: I have a SUSE SLES 11.1 SP1 machine as a Samba master PDC with an LDAP backend. One share holds database files for a client-server application. I log XP and Windows 7 machines on to the local domain (example.local); the login is a little slow but works. The client computers have an executable which opens, reads and writes the database files from the server share.
    The problem: when running Samba with the LDAP password backend, the client application runs VERY SLOW, with a maximum transfer rate of 2.5 Mbit per second. If I disable LDAP, the client app speed increases 20x, with a transfer rate of 50 Mbit/sec, and runs smoothly. I'm doing tests with just two users and two machines, so concurrency or LDAP size shouldn't be the problem here.
    The suspect: LDAP and the smb.conf [global] section configuration.
    The question: what can I do? I've googled a lot, but still have no answer.
    The slow smb.conf WITH LDAP:
        [global]
        workgroup = zmartsoft.local
        passdb backend = ldapsam:ldap://127.0.0.1
        printing = cups
        printcap name = cups
        printcap cache time = 750
        cups options = raw
        map to guest = Bad User
        logon path = \\%L\profiles\.msprofile
        logon home = \\%L\%U\.9xprofile
        logon drive = P:
        usershare allow guests = Yes
        add machine script = /usr/sbin/useradd -c Machine -d /var/lib/nobody -s /bin/false %m$
        domain logons = Yes
        domain master = Yes
        local master = Yes
        netbios name = server
        os level = 65
        preferred master = Yes
        security = user
        wins support = Yes
        idmap backend = ldap:ldap://127.0.0.1
        ldap admin dn = cn=Administrator,dc=zmartsoft,dc=local
        ldap group suffix = ou=Groups
        ldap idmap suffix = ou=Idmap
        ldap machine suffix = ou=Machines
        ldap passwd sync = Yes
        ldap ssl = Off
        ldap suffix = dc=zmartsoft,dc=local
        ldap user suffix = ou=Users
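
    One hedged smb.conf experiment that often helps with exactly this symptom, plus the usual companion step:

        # smb.conf [global] addition: let smbd read account/group data straight from
        # LDAP instead of going through NSS lookups on every session setup
        ldapsam:trusted = yes

    If nsswitch.conf also points passwd/group at LDAP, running nscd to cache those lookups is the other common win; temporarily raising log level = 3 will show where smbd is actually spending the time.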

    Read the article

  • Installing Midnight Commander from sources (no root privileges)

    - by ouroboros
    In glib-2.26.1/ I tried ./configure --prefix=/localfolder, then make and make install, but it fails at the make stage. Trying to configure and make mc-4.6.1/ obviously doesn't work either. What are the steps I need to take in order to install Midnight Commander for my local user in a custom folder? make for glib gives me these errors:
        /usr/bin/msgfmt: found 2 fatal errors
        cp: cannot stat `test.mo': No such file or directory
        gmake[4]: *** [test.mo] Error 1
        gmake[4]: Leaving directory `/remote/folder/mc/glib-2.26.1/gio/tests'
        gmake[3]: *** [all-recursive] Error 1
        gmake[3]: Leaving directory `/remote/folder/mc/glib-2.26.1/gio'
        gmake[2]: *** [all] Error 2
        gmake[2]: Leaving directory `/remote/folder/mc/glib-2.26.1/gio'
        gmake[1]: *** [all-recursive] Error 1
        gmake[1]: Leaving directory `/remote/folder/mc/glib-2.26.1'
        gmake: *** [all] Error 2
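
    A hedged sketch of the usual no-root recipe once glib builds: install everything under one prefix and let mc's configure find it there (the prefix path is an assumption):

        PREFIX=$HOME/local
        export PKG_CONFIG_PATH=$PREFIX/lib/pkgconfig:$PKG_CONFIG_PATH
        export LD_LIBRARY_PATH=$PREFIX/lib:$LD_LIBRARY_PATH
        export PATH=$PREFIX/bin:$PATH

        (cd glib-2.26.1 && ./configure --prefix=$PREFIX && make && make install)
        (cd mc-4.6.1    && ./configure --prefix=$PREFIX && make && make install)

    The msgfmt "fatal errors" in glib's gio/tests usually point at an old gettext/msgfmt on the build host; building a newer gettext into the same prefix first (same configure/make/make install dance) is a common workaround.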

    Read the article

  • Optimal file system type and mount options for an rsnapshot dedicated drive

    - by Nimmy Lebby
    We have an external USB 2 drive that we are using as a backup drive for our configuration. We use rsnapshot for the backups. It uses a few standard commands for managing snapshots:
        rm -rf: deletes expired snapshots
        mv: moves older snapshots down a slot
        cp -al: duplicates the last snapshot to a new slot
        rsync -a --delete --numeric-ids --relative: synchronizes the new snapshot
    As you can see from the log below, the majority of the time is spent on the rm -rf and cp -al steps:
        [25/Dec/2010:14:00:02] rsnapshot hourly: started
        [25/Dec/2010:14:00:02] echo 21012 > /var/run/rsnapshot.pid
        [25/Dec/2010:14:00:02] rm -rf /mnt/extdrive/snapshots/hourly.5/
        [25/Dec/2010:14:15:48] mv /mnt/extdrive/snapshots/hourly.4/ /mnt/extdrive/snapshots/hourly.5/
        [25/Dec/2010:14:15:48] mv /mnt/extdrive/snapshots/hourly.3/ /mnt/extdrive/snapshots/hourly.4/
        [25/Dec/2010:14:15:48] mv /mnt/extdrive/snapshots/hourly.2/ /mnt/extdrive/snapshots/hourly.3/
        [25/Dec/2010:14:15:48] mv /mnt/extdrive/snapshots/hourly.1/ /mnt/extdrive/snapshots/hourly.2/
        [25/Dec/2010:14:15:48] cp -al /mnt/extdrive/snapshots/hourly.0 /mnt/extdrive/snapshots/hourly.1
        [25/Dec/2010:14:23:32] rsync -a --delete --numeric-ids --relative /etc /mnt/extdrive/snapshots/hourly.0/sm4/
        [25/Dec/2010:14:23:52] touch /mnt/extdrive/snapshots/hourly.0/
        [25/Dec/2010:14:23:52] rm -f /var/run/rsnapshot.pid
        [25/Dec/2010:14:23:52] rsnapshot hourly: completed successfully
    My questions:
        I'm currently using ext4 for the filesystem. Maybe this is not the best choice among those available in Red Hat. Does anyone have any recommendations that would speed up the process?
        The partition's mount options are sync,dirsync 1 2. Is there a way to optimize this, since the drive is used solely for rsnapshot?
    Of course, reasoning would be greatly appreciated.
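
    To illustrate the mount-option angle: sync and dirsync force every one of the many link/unlink operations that cp -al and rm -rf generate to hit the disk synchronously, which is usually what makes those two steps dominate. A hedged fstab example (device name assumed):

        # /etc/fstab -- was: /dev/sdb1  /mnt/extdrive  ext4  sync,dirsync  1 2
        /dev/sdb1  /mnt/extdrive  ext4  defaults,noatime  0 2

    followed by mount -o remount /mnt/extdrive and a timed rsnapshot run to compare; on a backup-only disk, the data-loss window that sync was guarding against is usually an acceptable trade.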

    Read the article

  • Alternative to Windows Home Server (WHS) backups

    - by Adam Tegen
    Since Microsoft announced the end of life for WHS, are there any alternatives? Specifically, I am interested in recovering from a catastrophic disk failure the way WHS does it. For example, this is my ideal scenario when a desktop hard drive fails (has a bad virus, etc.):
        Install a disk of the same size or greater.
        Boot the desktop with the recovery disc.
        Point the recovery application at the WHS.
        Pick the machine, the drive(s) and the date of the backup.
        Have a couple of beers.
        Reboot to a working machine as if nothing happened.
    I would need to slap multiple disks into the machine without RAID; it sounds like LVM will work here. It would be nice, but not required, to have de-duplication of files when multiple machines are backed up (Single Instance Storage).

    Read the article

  • Debian tuning for increasing read/write buffer.

    - by Claudiu
    Is there a way to modify Debian's settings so that more memory is used for disk read/write caching? I am already using RAID 0, but that's not enough for multiple users, and the disk is struggling. Torrents hit the disk very hard, and rTorrent doesn't have cache settings.
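
    Read caching already uses all otherwise-free RAM automatically; the knobs below mainly control how much dirty (not yet written) data the kernel will hold before flushing, which is what helps bursty torrent writes. A hedged sketch with illustrative values:

        # runtime:
        sysctl -w vm.dirty_background_ratio=10   # % of RAM dirty before background writeback starts
        sysctl -w vm.dirty_ratio=40              # % of RAM dirty before writers are blocked
        sysctl -w vm.vfs_cache_pressure=50       # keep dentry/inode caches around longer

        # /etc/sysctl.conf, to make it permanent:
        vm.dirty_background_ratio = 10
        vm.dirty_ratio = 40
        vm.vfs_cache_pressure = 50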

    Read the article

  • Diagnosing high CPU waiting

    - by Will
    I have a monitoring server that is running Icinga/collectd/Graphite with about 50 hosts. I have noticed high load and sluggish performance on the box. If you take a look at top, you'll see:
        Cpu(s): 0.6%us, 0.2%sy, 0.0%ni, 7.6%id, 23.4%wa, 0.0%hi, 0.2%si, 0.0%st
    Notice the HUGE %wa value, which as far as I know means a network or disk bottleneck. ifconfig shows no dropped packets and there's not a ton of bandwidth in use, so that leaves disk issues, right? There's not a lot of disk writing going on either: iotop reports we're only writing a little over 1 MB per second, and the RAID tool reports everything is A-OK with write caching enabled. How do I go about figuring out how to fix this?
    UPDATE: iostat -x output is:
        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                   0.62    0.10    0.31    9.65    0.00   89.31
        Device:  rrqm/s  wrqm/s    r/s    w/s   rsec/s   wsec/s  avgrq-sz  avgqu-sz   await  svctm  %util
        sda        0.21   33.34  83.55  16.54  1599.94   399.07     19.97     43.21  416.98   3.71  37.13
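
    A few hedged commands that usually pin down who is waiting on what (iostat and pidstat come from the sysstat package):

        iostat -x 5          # await/%util per device, sampled live instead of averaged since boot
        iotop -o -d 5        # only the processes currently doing I/O
        pidstat -d 5         # per-process read/write rates
        dmesg | tail -n 50   # a failing disk or degraded RAID can show up as pure iowait

    With carbon/whisper in the mix, a very common culprit is the flood of small random writes to the .wsp files, so it is worth checking whether the I/O shown by iotop lines up with the carbon-cache process.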

    Read the article
