Search Results

Search found 26947 results on 1078 pages for 'util linux'.

  • Using more recent kernel for Xen Dom0 in production

    - by thelsdj
    Does anyone have experience running Xen dom0 on a more recent kernel than the stock 2.6.18? What host distro are you running? What release of Xen (or hg/git changeset)? What set of patches are you applying to the kernel source? Has anyone got the pvops dom0 work running in production, or is it better to stick with something like the SUSE patches? Any other tips and tricks for running a more recent kernel as dom0 would be helpful.

    Read the article

  • iptables to block VPN-traffic if not through tun0

    - by dacrow
    I have a dedicated webserver running Debian 6 with Apache, Tomcat, Asterisk and some mail services. We needed to add VPN support for one particular program, so we installed OpenVPN and registered with a VPN provider. The connection works well and we have a virtual tun0 interface for tunneling. To achieve the goal of tunneling only a single program through the VPN, we start the program with

        sudo -u username -g groupname command

    and added an iptables rule to mark all traffic coming from groupname:

        iptables -t mangle -A OUTPUT -m owner --gid-owner groupname -j MARK --set-mark 42

    Afterwards we tell iptables to do some SNAT and tell ip route to use a special routing table for marked packets. Problem: if the VPN fails, there is a chance that the to-be-tunneled program communicates over the normal eth0 interface. Desired solution: marked traffic should not be allowed to go directly through eth0; it has to go through tun0 first. I tried the following commands, which didn't work:

        iptables -A OUTPUT -m owner --gid-owner groupname ! -o tun0 -j REJECT
        iptables -A OUTPUT -m owner --gid-owner groupname -o eth0 -j REJECT

    The problem may be that these rules fail because the packets are first marked, then put into tun0, and then transmitted by eth0 while they are still marked. I don't know how to de-mark them after tun0, or how to tell iptables that marked packets may pass eth0 if they have been through tun0 first or are headed to the gateway of my VPN provider. Does anyone have an idea for a solution? Some config info:

        iptables -nL -v --line-numbers -t mangle
        Chain OUTPUT (policy ACCEPT 11M packets, 9798M bytes)
        num pkts  bytes target   prot opt in out source    destination
        1   591K  50M   MARK     all  --  *  *  0.0.0.0/0 0.0.0.0/0  owner GID match 1005 MARK set 0x2a
        2   82812 6938K CONNMARK all  --  *  *  0.0.0.0/0 0.0.0.0/0  owner GID match 1005 CONNMARK save

        iptables -nL -v --line-numbers -t nat
        Chain POSTROUTING (policy ACCEPT 393 packets, 23908 bytes)
        num pkts bytes target prot opt in out  source    destination
        1   15   1052  SNAT   all  --  *  tun0 0.0.0.0/0 0.0.0.0/0  mark match 0x2a to:VPN_IP

        ip rule add from all fwmark 42 lookup 42
        ip route show table 42
        default via VPN_IP dev tun0
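
    A minimal kill-switch sketch along these lines, matching on the fwmark instead of the owner, and treating VPN_IP (and UDP port 1194) as placeholders for the provider's real endpoint:

        # marked packets may leave via the tunnel
        iptables -A OUTPUT -m mark --mark 42 -o tun0 -j ACCEPT
        # the tunnel itself must still be able to reach the VPN endpoint
        iptables -A OUTPUT -d VPN_IP -o eth0 -p udp --dport 1194 -j ACCEPT
        # anything else still carrying the mark must not leave another way
        iptables -A OUTPUT -m mark --mark 42 ! -o tun0 -j REJECT

    Note that the encapsulated packets OpenVPN re-sends over eth0 are generated by the openvpn process itself, so as long as that process does not run as groupname they never receive the mark and pass untouched.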

    Read the article

  • How do I forward root's email to an external email address?

    - by ErebusBat
    I have a small server (Ubuntu 10.04) at my house and I would like to forward root's email to my Gmail-hosted domain so that I get security notifications and whatnot. I ripped everything out and started from scratch and ran into some other issues. I now have sendmail working in the sense that I can mail [email protected] and get the mail. However, adding an address to /root/.forward does not actually forward the message. I get the following in my logs:

        Dec 22 14:04:37 batcave sendmail[4695]: oBML4bAT004695: to=<root@batcave>, ctladdr=aburns (1000/1000), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=30075, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (oBML4bJ9004696 Message accepted for delivery)
        Dec 22 14:04:39 batcave sm-mta[4698]: STARTTLS=client, relay=[69.145.248.18], version=TLSv1/SSLv3, verify=FAIL, cipher=DES-CBC3-SHA, bits=168/168
        Dec 22 14:04:40 batcave sm-mta[4698]: oBML4bJ9004696: to=<[email protected]>, ctladdr=<[email protected]> (1000/1000), delay=00:00:03, xdelay=00:00:03, mailer=relay, pri=120336, relay=[69.145.248.18] [69.145.248.18], dsn=2.0.0, stat=Sent (OK 01/D4-00853-216621D4)

    You can see where my local sendmail instance accepts the message and then hands it off to my ISP, but with the wrong address ([email protected]).
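
    A common alternative worth noting: system-wide root mail is usually redirected via the aliases database rather than /root/.forward, since an alias is applied before local delivery is even attempted. A minimal sketch (the destination address is a placeholder):

        # /etc/aliases
        root: you@yourdomain.example

        # rebuild the alias database after editing
        sudo newaliases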

    Read the article

  • OpenLDAP ACLs are not working

    - by Dr I
    First things first: I'm currently working with OpenLDAP slapd 2.4.36 on Fedora release 19 (Schrödinger’s Cat). I installed openldap with yum and my configuration is the following:

        ##### OpenLDAP Default configuration #####
        #
        ##### OpenLDAP CORE CONFIGURATION #####
        include /etc/openldap/schema/core.schema
        include /etc/openldap/schema/cosine.schema
        include /etc/openldap/schema/inetorgperson.schema
        include /etc/openldap/schema/nis.schema
        pidfile /var/lib/ldap/slapd.pid
        loglevel trace

        ##### Default Schema #####
        database mdb
        directory /var/lib/ldap/
        maxsize 1073741824
        suffix "dc=domain,dc=tld"
        rootdn "cn=root,dc=domain,dc=tld"
        rootpw {SSHA}SECRETP@SSWORD

        ##### Default ACL #####
        access to attrs=userpassword
            by self write
            by group.exact="cn=administrators,ou=builtin,ou=groups,dc=domain,dc=tld" write
            by anonymous auth
            by * none

    I launch my OpenLDAP service using:

        /usr/sbin/slapd -u ldap -h "ldapi:/// ldap:///" -f /etc/openldap/slapd.conf

    As you can see, it's a pretty simple ACL: it aims to give the entry's owner write access to the userPassword attribute, give the administrators group write access, let anonymous users touch it only for authentication, and refuse access to everyone else. The problem is that even with a valid user and the correct password, my ldapsearch ends with zero entries retrieved from the directory, plus a strange response on the result line:

        # search result
        search: 2
        result: 32 No such object

        # numResponses: 1

    Here is the ldapsearch request:

        ldapsearch -H ldap.domain.tld -W -b dc=domain,dc=tld -s sub \
            -D cn=user,ou=service,ou=employees,ou=users,dc=domain,dc=tld

    I did not specify any filter, as I want to check that ldapsearch correctly prints only the allowed attributes.
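
    Two hedged things stand out here. First, -H expects a full URI rather than a bare hostname, so ldap://ldap.domain.tld. Second, once any access directive exists, entries it doesn't cover fall through to an implicit "access to * by * none", which would produce exactly this empty result; a catch-all rule is the usual cure. A sketch:

        # assumed catch-all, appended after the userpassword ACL
        access to *
            by users read
            by * none

        ldapsearch -H ldap://ldap.domain.tld -W -b dc=domain,dc=tld -s sub \
            -D cn=user,ou=service,ou=employees,ou=users,dc=domain,dc=tld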

    Read the article

  • Not able to connect to internet in RHEL 5

    - by chankey007
    I just installed Red Hat Enterprise Linux 5 on both my laptop and my desktop. On the desktop it works fine, but on the laptop it does not show the eth device (with ifconfig, only lo is there). I tried ifup eth0 and still nothing happened. The network service is enabled in /etc/sysconfig/network-scripts/ifcfg-eth0 and in /etc/sysconfig/network. I had Ubuntu on this laptop before and faced the same problem there too. Is there a problem with my laptop? The system is dual-boot, and networking runs fine under Windows 7. Only the internet connection is affected; other devices work fine. System config: Sony Vaio E series, 3 GB RAM, Intel Core i3 2.13 GHz.
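
    A hedged first round of diagnostics for a NIC that never shows up, working from the bottom of the stack upwards:

        # is the NIC visible on the PCI bus at all?
        lspci | grep -i -e ethernet -e network

        # did the kernel bind a driver / complain about missing firmware?
        dmesg | grep -i -e eth -e firmware

        # RHEL 5's hardware prober (if present); lists unconfigured devices
        kudzu -p -c network

    If lspci shows the adapter but dmesg never mentions an eth device, the stock RHEL 5 kernel most likely lacks a driver for that chipset, which would also explain the same failure under the older Ubuntu.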

    Read the article

  • RPM with RHEL: install 2 versions of same package / different arch

    - by Nicolas Tourneur
    I think the title is pretty self-explanatory :) Is it possible, under RHEL 5, to install two instances of the same package, one built for 32-bit and one for 64-bit? Obviously, the host runs a 64-bit kernel and has the required compatibility libraries. (In this case, we would need a 64-bit JDK and a 32-bit one.) If yes, is there any special rpm flag to use (a change of installation directory, for instance)? Thanks in advance,
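
    For what it's worth, RHEL's multilib support does let both architectures of a package coexist as long as their file lists don't collide; a sketch with a hypothetical package name:

        # with yum, an explicit arch suffix selects the build
        yum install example-jdk.x86_64 example-jdk.i386

        # or with rpm directly
        rpm -ivh example-jdk-1.6.0-1.x86_64.rpm example-jdk-1.6.0-1.i386.rpm

        # confirm which architectures ended up installed
        rpm -q --qf '%{NAME}-%{VERSION}.%{ARCH}\n' example-jdk

    Where files would collide (anything outside the /usr/lib vs. /usr/lib64 split), a relocatable package can be moved with rpm --prefix, though most JDK packages already install under arch-specific directories.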

    Read the article

  • How do I make zeitgeist work in Arch?

    - by wleoncio
    I've been trying to set up Zeitgeist on my GNOME Shell system for a couple of days, but I have yet to get it to work. I've done everything I could think of, i.e. installing zeitgeist from [extra] as well as libqzeitgeist. I've also installed all the GNOME extensions created by Seif (https://extensions.gnome.org/accounts/profile/seif), since they're the reason I'm installing the package in the first place. I've tried running "zeitgeist-daemon --replace" and then "gnome-shell --replace", but nothing seems to work. According to Der Harm's wiki (https://wiki.archlinux.org/index.php/User:Der_harm#Gnome_Zeitgeist), the Zeitgeist daemon doesn't need to be explicitly started, but even if it did, I don't know how to do it (since it's not in /etc/rc.d, I bet adding "zeitgeist" to my rc.conf wouldn't do any good either). I can't believe there isn't a very simple setup step here; please help me see what I'm missing!
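
    Since zeitgeist-daemon is normally activated on demand over D-Bus rather than run as an rc.d service, one hedged workaround is a per-user autostart entry so the daemon is already up before the shell extensions ask for it (file name and location assumed):

        # ~/.config/autostart/zeitgeist-daemon.desktop
        [Desktop Entry]
        Type=Application
        Name=Zeitgeist Daemon
        Exec=zeitgeist-daemon
        X-GNOME-Autostart-enabled=true

    Checking first whether the daemon is even alive (pgrep -l zeitgeist) would confirm whether startup is the problem at all.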

    Read the article

  • How can one associate a 3ware controller with the corresponding /dev/tw?? device?

    - by barbaz
    I have a few 3ware RAID controllers installed in a system. Is there any way to figure out the mapping between the following identifiers, each of which describes the very same RAID controller in a different way?

    - the tw_cli-reported controller id (e.g. c0, c1, c2, ...)
    - the corresponding device nodes that allow smartctl access via the 3ware driver (e.g. /dev/twa0, /dev/twa1, /dev/twl0)
    - the block device presented to the system representing a RAID unit (/dev/sda, /dev/sdb, ...)
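
    One hedged way to correlate the first two is by drive identity: query the disk behind a given port through each character node with smartctl and compare serial numbers with what tw_cli reports per controller (port 0 is an assumption; pick any populated port):

        # the disk on port 0, as seen through each 3ware node
        for dev in /dev/twa* /dev/twl*; do
            echo "== $dev =="
            smartctl -i -d 3ware,0 "$dev" | grep -i serial
        done

        # the same disks as tw_cli sees them, per controller
        tw_cli show
        tw_cli /c0 show

    For the block devices, sysfs usually settles it: the SCSI device behind /dev/sda is a symlink at /sys/block/sda/device, and the PCI address in that path can be matched against the controller.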

    Read the article

  • Determine Location of Inode Usage

    - by Dave Forgac
    I recently installed Munin on a development web server to keep track of system usage. I've noticed that the system's inode usage is climbing by about 7-8% per day even though the disk usage has barely increased at all. I'm guessing something is writing a ton of tiny files, but I can't find what or where. I know how to find disk space usage, but I can't seem to find a way to summarize inode usage. Is there a good way to determine inode usage by directory so I can locate the source?
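
    A rough per-directory entry count (a good proxy for inode usage) needs nothing more than find; a sketch, run from the filesystem's root:

        # count entries under each top-level directory, ascending
        for d in */; do
            printf '%d\t%s\n' "$(find "$d" -xdev | wc -l)" "$d"
        done | sort -n

    Re-running it inside whichever directory tops the list quickly narrows down the culprit.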

    Read the article

  • Anyone know a good web-based file upload package?

    - by Ted Wexler
    Basically, what I'm looking for is a place where one of our end users can upload a file after receiving a code from one of our support engineers, or vice versa (our engineers upload a file and send a code/link/something to the end user). I've spent a bunch of time googling this and found http://turin.nss.udel.edu/programming/dropbox2/, but the code there scares me, and it also doesn't render properly under PHP 5.3 (it uses short tags, and who knows what else). Does anyone have any recommendations?

    Read the article

  • Apache won't follow Symlink

    - by Marvin Dickhaus
    I have a LAMP server (Ubuntu 12.10) set up on my development machine, a T60 modified with an SSD. The server base is in /var/www. Apache has the following config:

        DocumentRoot /var/www
        <Directory />
            Options FollowSymLinks
            AllowOverride None
        </Directory>
        <Directory /var/www/>
            Options Indexes FollowSymLinks MultiViews SymLinksIfOwnerMatch
            AllowOverride all
            Order allow,deny
            allow from all
        </Directory>

    I'm currently developing a site based on the SilverStripe CMS. The folder for the server is /var/www/sfk/. The framework and all CMS-relevant features are in their respective folders; the only folder that needs to be modified is /var/www/sfk/mysite. Because of that, I want to keep the mysite folder under my home directory and symlink it into the server folder. So here is what I've done:

        ln -s ~/sfk/mysite/ /var/www/sfk/
        sudo chgrp www-data /var/www/sfk/mysite -R

    ls tells me the following (excerpt of /var/www/sfk):

        drwxr-xr-x  3 marvin www-data 4096 Nov 16 16:53 assets
        drwxr-xr-x 12 marvin www-data 4096 Nov 16 16:53 cms
        drwxr-xr-x 29 marvin www-data 4096 Nov 16 16:53 framework
        -rw-r--r--  1 marvin www-data 2410 Nov 16 16:53 index.php
        lrwxrwxrwx  1 marvin www-data   24 Nov 20 17:45 mysite -> /home/marvin/sfk/mysite/
        -rw-rw-r--  1 marvin www-data  514 Nov 16 16:55 _ss_environment.php
        drwxr-xr-x  4 marvin www-data 4096 Nov 16 16:53 themes

    and ls /var/www/sfk/mysite/:

        drwxrwxr-x 6 marvin www-data 4096 Nov 16 00:15 code
        drwxrwxr-x 2 marvin www-data 4096 Nov 16 11:51 _config
        -rwxrwxr-x 1 marvin www-data 2685 Nov 16 15:39 _config.php
        drwxrwxr-x 2 marvin www-data 4096 Nov 16 00:15 css
        drwxrwxr-x 2 marvin www-data 4096 Nov 16 00:15 images
        drwxrwxr-x 2 marvin www-data 4096 Nov 16 00:15 javascript
        drwxrwxr-x 5 marvin www-data 4096 Nov 16 00:15 templates

    This is literally the same setup I have on my desktop machine. The problem is that the mysite/ folder is just not recognized. I'm thankful for any advice; I've been stuck on this issue for hours.
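
    One hedged thing to check: Apache must be able to traverse every directory component of the symlink's target, and home directories are often created without world-execute. A sketch, with the paths taken from the listing above:

        # x on a directory grants traversal without granting the right to list it
        chmod o+x /home/marvin
        chmod o+x /home/marvin/sfk

    Also note that SymLinksIfOwnerMatch refuses symlinks whose owner differs from the target's owner, so the link and /home/marvin/sfk/mysite should be owned by the same user.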

    Read the article

  • Groups and Symlinks, is this safe?

    - by sjohns
    Hi, I'm trying to serve similar content over two websites, but I don't want to keep two copies of each file, especially as they grow. The basics: I'm running CentOS with cPanel. Is it safe to do the following? I have the folder downloads1 in /home/user1/www/downloads1/ and I have user2. Can I make a group, add both users to it, and hand the folder over to that group:

        groupadd sharedfiles
        useradd -g sharedfiles user1
        useradd -g sharedfiles user2
        chown -R -v user1:sharedfiles downloads1/

    For user2 I want /home/user2/www/downloads1 to be a symlink, something like:

        ln -s "/home/user1/www/downloads1/" "downloads1"
        lrwxrwxrwx 1 user2 sharedfiles 11 May 9 14:20 downloads1 -> /home/user1/www/downloads1/

    Is this a safe practice, or is there a better way if I want them both to be able to share the files for distribution over Apache? Are there any drawbacks? Thanks in advance for any light shed on this. I'm not 100% sure whether this should have gone here or on Server Fault.
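
    Two hedged corrections to the recipe above: for existing accounts, group membership is granted with usermod rather than useradd (which creates users), and the shared tree needs group-read permissions, not just group ownership:

        groupadd sharedfiles
        usermod -a -G sharedfiles user1
        usermod -a -G sharedfiles user2

        # group ownership plus group read (and traverse on directories)
        chown -R user1:sharedfiles /home/user1/www/downloads1
        chmod -R g+rX /home/user1/www/downloads1

    Apache additionally needs Options FollowSymLinks (or SymLinksIfOwnerMatch) active for the directory that serves the link.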

    Read the article

  • Why isn't this smbmount attempt working?

    - by Max Williams
    I can successfully access one of our local Samba shares, which is on a Windows PC (called marina), as follows:

        $ sudo /usr/bin/smbclient \\\\marina\\resource_library <my password>
        Domain=[MARINA] OS=[Windows 5.1] Server=[Windows 2000 LAN Manager]
        smb: \>

    So that works. I'm now trying to mount the above location (the resource_library folder on marina) onto /mnt/resource_library (as a read-only folder), but it keeps failing. I've tried a few variations of specifying the location:

        $ sudo smbmount \\\\marina\\resource_library /mnt/resource_library -o username=max,password=<my password>,r
        mount error: could not resolve address for marina: No address associated with hostname
        No ip address specified and hostname not found

    and

        $ sudo smbmount //marina/resource_library /mnt/resource_library -o username=max,password=<my password>,r
        mount error: could not resolve address for marina: No address associated with hostname
        No ip address specified and hostname not found

    and both of the above with MARINA instead of marina. It's bound to be some dumb mistake I'm making; can anyone see it? Cheers, max
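
    A hedged reading of the error: smbclient can fall back to NetBIOS name resolution, while the kernel mount helper typically relies on plain DNS, which would explain why one works and the other cannot resolve marina. Two sketches, with the address being a placeholder for marina's real one:

        # pin the server address on the mount itself
        sudo mount -t cifs //marina/resource_library /mnt/resource_library \
            -o username=max,ip=192.168.1.50,ro

        # or teach the resolver the name permanently
        echo '192.168.1.50  marina' | sudo tee -a /etc/hosts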

    Read the article

  • Using udev to create a character device based on a driver being loaded

    - by SteveCB
    I'm in the process of setting up RAID monitoring for a number of Dell servers that use the PERC 6/i integrated card. We're using Nagios at present, and the check_megasasctl plugin seems to fit the bill. However, the plugin relies upon the existence of /dev/megaraid_sas_ioctl_node. This device node doesn't exist by default; you have to create it by hand using something like:

        mknod /dev/megaraid_sas_ioctl_node c 253 0

    Now, to make the existence of this device node persistent across reboots, I thought I could write a udev rule, but as usual I'm missing something. I thought I could create a file such as /etc/udev/rules.d/10-local.rules containing:

        DRIVER=="megasas" NAME="megaraid_sas_ioctl_node" MODE="0600"

    But this doesn't work: no device node after a reboot. dmesg output indicates the megasas driver is loaded and functional:

        megasas: 00.00.04.01-RH1 Thu July 10 09:41:51 PST 2008
        megasas: 0x1000:0x0060:0x1028:0x1f0c: bus 1:slot 0:func 0
        megasas: FW now in Ready state

    Further, I don't see any means to instruct udev on which type of device node to create: character or block. I suspect I'm failing to understand exactly how udev is meant to work. I realise I could just cheat and run MegaCLI in /etc/rc.local, redirecting output to /dev/null; it creates the megaraid_sas_ioctl_node device node as part of its execution. I just thought using udev rules would be (a) cleaner and (b) a useful learning exercise. Perhaps I should just dump the above mknod command in /etc/rc.local... So how do I get udev to create the /dev/megaraid_sas_ioctl_node device node based on the presence of the megasas driver? Cheers, Steve
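
    One hedged approach: udev rules fire on device events rather than on bare driver loads, and the megasas driver registers a misc character device for its ioctl interface, commonly named megaraid_sas_ioctl (with a dynamic major, which is why the mknod above uses 253). If that name holds on this kernel, a rule keyed on the kernel device name should do it:

        # /etc/udev/rules.d/10-local.rules -- device name assumed; verify first:
        #   grep megaraid /proc/misc
        KERNEL=="megaraid_sas_ioctl", NAME="megaraid_sas_ioctl_node", MODE="0600"

    The character-vs-block question answers itself: the node type comes from the kernel event, so the rule never needs to state it.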

    Read the article

  • OSS Router firmwares

    - by Cherian
    DD-WRT, OpenWrt, Tomato, or other third-party firmware projects? What are the compelling reasons to choose between these? I used to be a great DD-WRT fan until I realized that the author was deceiving users by publishing it as OSS while making it very cumbersome to download and change the source (it requires you to download GBs of source files). Also, their bandwidth monitoring feature is part of the paid version, which IMHO is a killer. Having said that, DD-WRT just worked, and I think that's great.

    Read the article

  • Should I be worried about the max number of files in a folder in *NIX filesystems?

    - by ??????
    In a social networking project we want to store users' avatars in a folder. I think within a year or two it'll reach 140K files (I've seen this issue before, and it was around this number). I want to spread the files across folders: if a folder contains 1000 files, create another folder and store files 1001 to 2000 there. Is this a good approach, or am I just being overly cautious? (Filesystem: ext3.)
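
    A sketch of the thousand-per-folder split described above, in plain shell (numeric user IDs assumed):

        user_id=140001
        bucket=$(( user_id / 1000 ))      # one folder per 1000 users: 0, 1, 2, ...
        dir="avatars/$bucket"
        mkdir -p "$dir"
        cp upload.jpg "$dir/$user_id.jpg"

    For reference, ext3 has no hard per-directory file limit; the practical issue is lookup speed in huge directories, which the dir_index feature (enabled by default on most modern ext3 systems) largely mitigates. Splitting as above keeps directory scans fast regardless.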

    Read the article

  • tmpreaper, --protect and a non-root user

    - by nsg
    Hi, I'm a little confused. I have a download directory from which I want to remove all files older than 30 days with tmpreaper. Just one problem: the directory in question is a separate partition with a lost+found directory, which of course I need to keep, so I added --protect 'lost+found'. The problem is that tmpreaper outputs:

        error: chdir() to directory `lost+found' (inode 11) failed: Permission denied (PID 30604)
        Back from recursing down `lost+found'.
        Entry matching `--protect' pattern skipped. `lost+found'

    I have tried other patterns like lost* and so on. I'm running tmpreaper as a non-root user because there is no reason for superuser privileges: I own all the files (except lost+found). Am I forced to run tmpreaper as root, or are my shell skills not as good as I thought? I guess the problem is this, from the documentation: "tmpreaper will chdir(2) into each of the directories you've specified for cleanup, and check for files matching the <shell_pattern> there. It then builds a list of them, and uses that to protect them from removal."

    Edit: The command I'm trying to run is something like:

        $ /usr/sbin/tmpreaper -t --protect 'lost+found' 30d /mydir 1> /dev/null
        error: chdir() to directory `lost+found' (inode 11) failed: Permission denied

    Edit 2: I read the source code of tmpreaper-1.6.13 and found this:

        if (safe_chdir (dirname))
            exit (1);

    and

        if (chdir (dirname)) {
            message (LOG_ERROR, "chdir() to directory `%s' (inode %lu) failed: %s\n",
                     dirname, (u_long) sb1.st_ino, strerror (errno));
            return 1;
        }

    So it seems tmpreaper needs to be able to chdir into every directory, ignored or not. That leaves me three options: run tmpreaper as root, move the download directory, or find an alternative tool (tmpwatch?). I will do some more research before I make a choice.
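
    For completeness, plain find can do the same job without ever entering lost+found; a sketch, assuming "older than 30 days" means modification time:

        # delete files older than 30 days, never descending into lost+found
        find /mydir -path /mydir/lost+found -prune -o \
            -type f -mtime +30 -print0 | xargs -0 -r rm --

    (The prune-then-pipe form is deliberate: find's -delete switches on -depth, which defeats -prune, so deleting via xargs keeps the exclusion intact.)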

    Read the article

  • Debian: SSH: "PermitRootLogin=forced-commands-only" stopped working

    - by Brent
    I have several servers running Debian Lenny. I recently discovered the PermitRootLogin=forced-commands-only directive for ssh, which allows me to run a scripted rsync as root with an ssh key, without enabling more general root ssh access. However, last week this stopped working, apparently on all of my servers, and I can't figure out why. Everything continues to work fine with PermitRootLogin=yes, but I would prefer to block root logins, especially via passwords. The day it stopped working, we reconfigured some of the ports on one of our switches (which we later reverted), but I can't see how that would affect this, since it still works with PermitRootLogin set to yes. How can I diagnose why the forced-commands-only directive has apparently stopped working?
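
    A hedged way in: forced-commands-only only accepts root keys that carry a forced command, so the first check is the authorized_keys entry itself, and the second is a verbose one-shot sshd on a side port to watch the denial happen. Sketches (key, command and port are placeholders):

        # /root/.ssh/authorized_keys -- the key entry must carry command=
        command="rsync --server ..." ssh-rsa AAAA... backup@client

        # one-shot debug server instance on an unused port
        /usr/sbin/sshd -d -p 2222

        # matching verbose client attempt from the other end
        ssh -v -p 2222 root@server true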

    Read the article

  • Openbox fails on login

    - by Sam
    X throws this error:

        Xsession: unable to launch "/usr/share/xsessions/openbox.xsession"
        Xsession --- "/usr/share/xsessions/openbox.xsession" not found; falling back to default session.

    But a trip to the command line reveals:

        sam@Aristotle:/usr/share/xsessions$ ls -l
        ...
        -rw-r--r-- 1 root root 164 2011-05-22 12:27 openbox.xsession
        ...

    I am using Ubuntu 12.04, Openbox 3.5.0, and X.Org X Server 1.10.4.
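
    A hedged observation from the listing above: the file exists but carries no execute bit (-rw-r--r--), and Xsession launches session scripts by executing them, so "unable to launch" may mean nothing more than a missing permission:

        sudo chmod +x /usr/share/xsessions/openbox.xsession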

    Read the article

  • Gnome 3 screensaver (not suspend) resume hook?

    - by Daniel
    Whenever my laptop screen wakes from the screensaver, some settings are reset, such as the keyboard backlight, so I'd like to run some scripts each time this happens. I'm using GNOME 3 with Fedora 17, by the way. While researching this issue I came across pm-utils, which lets one hook programs onto the events pm-utils handles, but it seems monitor sleep (i.e. the screensaver) is not one of them; pm-utils appears to handle only suspend and hibernate. So is there a way to hook custom programs up to the screensaver?
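
    One hedged route that avoids pm-utils entirely: GNOME's screensaver announces state changes on the session bus, so a small watcher can run a script on each wake-up (interface name as commonly documented for GNOME 3; the script path is a placeholder):

        #!/bin/sh
        # run a script each time the screensaver deactivates (screen wakes)
        dbus-monitor --session \
            "type='signal',interface='org.gnome.ScreenSaver',member='ActiveChanged'" |
        while read -r line; do
            case "$line" in
                *"boolean false"*) "$HOME/bin/on-screen-wake.sh" ;;
            esac
        done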

    Read the article

  • Iptables NAT logging

    - by Gerard
    I have a box set up as a router using iptables (MASQUERADE), logging all network traffic. The problem: connections from LAN IPs to the WAN show up fine, i.e.

        SRC=192.168.32.10  DST=60.242.67.190

    but for traffic coming from the WAN to the LAN it logs the WAN IP as the source with the router's IP as the destination, and then the router's IP as the source with the LAN IP as the destination:

        SRC=60.242.67.190   DST=192.168.32.199
        SRC=192.168.32.199  DST=192.168.32.10   (192.168.32.199 is the router)

    How do I configure it so that it logs the conversations correctly, i.e.

        SRC=192.168.32.10  DST=60.242.67.190
        SRC=60.242.67.190  DST=192.168.32.10

    Any help appreciated, cheers.
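
    A hedged guess at the cause: where a LOG rule sits relative to NAT decides which addresses it sees. The filter table's FORWARD chain runs after inbound de-NAT and before outbound SNAT, so logging routed traffic there shows the real LAN and WAN endpoints in both directions; a sketch:

        # log routed LAN<->WAN traffic with the un-NATed LAN addresses visible
        iptables -A FORWARD -j LOG --log-prefix "FWD: "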

    Read the article
