Search Results

Search found 24755 results on 991 pages for 'linux mom'.


  • Add the "SAMBA File Server" role to a server running SCO Unix?

    - by I.T. Support
    We're trying to get network access to a hard drive on a server running SCO Unix from Windows Servers. I believe we need to add the role "SAMBA File Server" to the server so we can mount the drive as a network share that we can access from Windows. Is it possible to add the SAMBA role to a SCO Unix operating system? Are there any gotchas or concerns? Thanks
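
    For illustration, here is a minimal sketch of the kind of smb.conf share definition this would need, assuming a Samba package can be installed on that SCO release at all; the share name, path and group are placeholders:

        [drive_share]
            ; path to the mounted hard drive on the SCO box
            path = /mnt/shared_drive
            read only = no
            browseable = yes
            ; restrict to a group that exists on the SCO system
            valid users = @staff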


  • (monit) What does the failure "Changed" mean?

    - by bresc
    Hi, I installed monit on my server and tried to monitor nginx:

        check process nginx with pidfile /var/run/nginx.pid
            start program = "/etc/init.d/nginx start"
            stop program = "/etc/init.d/nginx stop"
            group server

    And I get:

        Process 'nginx'
            status            Changed
            monitoring status monitored
            data collected    Wed Mar 24 00:37:49 2010

    What does "Changed" mean? I couldn't find anything. Thx


  • chroot for insecure program execution

    - by attwad
    Hi, I have never set up a chroot-jailed environment before and I'm afraid I need some help to do it well. To explain shortly what this is all about: I have a webserver to which users send Python scripts to process various files that are stored on the server (the system is for research purposes). Every day a cron job starts the execution of the uploaded scripts via a command of this kind:

        /usr/bin/python script_file.py

    All of this is really insecure and I would like to create a jail in which I would copy the necessary files (uploaded scripts, files to process, Python binary and dependencies). I already looked at various utilities to create jails but none of them seemed up-to-date or they were lacking solid documentation (i.e. the links proposed in "How can I run an untrusted python script"). Could anyone guide me to a viable solution to my problem? Like a working example of a script that creates a jail, puts some files in it and executes a Python script? Thank you very much.
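
    As a starting point, a rough sketch of such a script; the paths are placeholders, the library discovery via ldd is approximate, and a chroot alone is not a complete security boundary for hostile code:

        #!/bin/sh
        # Build a throwaway jail, copy the interpreter and its libraries in,
        # then run the untrusted script inside it.
        JAIL=/var/jail
        mkdir -p "$JAIL/bin" "$JAIL/lib" "$JAIL/work"

        cp /usr/bin/python "$JAIL/bin/"
        # Copy every shared library the interpreter links against (ldd output
        # format varies slightly between systems; this parsing is approximate).
        for lib in $(ldd /usr/bin/python | grep -o '/[^ ]*'); do
            mkdir -p "$JAIL$(dirname "$lib")"
            cp "$lib" "$JAIL$lib"
        done
        # NOTE: most scripts also need the Python standard library from
        # /usr/lib/pythonX.Y copied into the jail.

        cp script_file.py "$JAIL/work/"
        chroot "$JAIL" /bin/python /work/script_file.py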


  • Xen virtual host can reach some sites but not others

    - by Tun H S Lee
    Okay, this is killing me. Debian Squeeze, Xen 4.0, brand new install. No iptables rules whatsoever except for the ones added by the default Xen bridge script. Dom0 can reach the entire world, no problems. DomU can receive packets from some hosts, but not from others. For instance, if I ping Host A, it works fine. If I ping Host B, the DomU reports 100% packet loss. The hosts are random, but consistent (even after reboots). I can see no pattern to why some work and others don't. In fact, in some cases different virtual hosts on the same server (and on another server at a different data center) are divided: some work and others do not. I can reboot (DomU or Dom0 too) and the same hosts will work or fail as before. If I tcpdump on Host B while pinging from the DomU, everything looks fine: it sees the echo request coming in and says it's sending one back. However, if I tcpdump peth0 on the Dom0, it never sees the echo reply. Any ideas what could be happening? I'm tearing my hair out here.
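
    A few diagnostic commands that might help narrow down where the reply disappears; the interface names here are the Xen defaults (peth0, eth0, vifX.Y) and may differ on this box:

        # Watch the echo replies at each hop of the bridge path.
        tcpdump -ni peth0 icmp        # physical NIC on Dom0
        tcpdump -ni eth0 icmp         # bridge side on Dom0
        brctl show                    # confirm peth0 and the DomU vif share a bridge
        # Bridged traffic can also be silently eaten by netfilter:
        sysctl net.bridge.bridge-nf-call-iptables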


  • Ubuntu hardware compatibility

    - by CT
    I have only previously played with Ubuntu using virtual machines with VMware Fusion, so everything just sort of worked and I've never had to install any drivers. I'm considering putting it on some real hardware and using it as a media center. What should I be looking for as far as checking hardware compatibility? How does installing drivers work? Any quick and easy recommendations / guides?


  • Amplified reflection attack on DNS

    - by Mike Janson
    The term is new to me, so I have a few questions about it. I've heard it mostly happens with DNS servers? How do you protect against it? How do you know if your servers can be used as a victim? This is a configuration issue, right? My named.conf file:

        include "/etc/rndc.key";

        controls {
            inet 127.0.0.1 allow { localhost; } keys { "rndc-key"; };
        };

        options {
            /* make named use port 53 for the source of all queries, to allow
             * firewalls to block all ports except 53:
             */
            // query-source port 53;

            /* We no longer enable this by default as the dns poison exploit has
               forced many providers to open up their firewalls a bit */

            // Put files that named is allowed to write in the data/ directory:
            directory "/var/named"; // the default
            pid-file "/var/run/named/named.pid";
            dump-file "data/cache_dump.db";
            statistics-file "data/named_stats.txt";
            /* memstatistics-file "data/named_mem_stats.txt"; */
            allow-transfer {"none";};
        };

        logging {
            /* If you want to enable debugging, eg. using the 'rndc trace' command,
             * named will try to write the 'named.run' file in the $directory (/var/named).
             * By default, SELinux policy does not allow named to modify the /var/named directory,
             * so put the default debug log file in data/ :
             */
            channel default_debug {
                file "data/named.run";
                severity dynamic;
            };
        };

        view "localhost_resolver" {
            /* This view sets up named to be a localhost resolver ( caching only nameserver ).
             * If all you want is a caching-only nameserver, then you need only define this view:
             */
            match-clients { 127.0.0.0/24; };
            match-destinations { localhost; };
            recursion yes;

            zone "." IN {
                type hint;
                file "/var/named/named.ca";
            };

            /* these are zones that contain definitions for all the localhost
             * names and addresses, as recommended in RFC1912 - these names should
             * ONLY be served to localhost clients:
             */
            include "/var/named/named.rfc1912.zones";
        };

        view "internal" {
            /* This view will contain zones you want to serve only to "internal" clients
               that connect via your directly attached LAN interfaces - "localnets" .
             */
            match-clients { localnets; };
            match-destinations { localnets; };
            recursion yes;

            zone "." IN {
                type hint;
                file "/var/named/named.ca";
            };

            // include "/var/named/named.rfc1912.zones";
            // you should not serve your rfc1912 names to non-localhost clients.

            // These are your "authoritative" internal zones, and would probably
            // also be included in the "localhost_resolver" view above :
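
    On the protection question, the usual first step is to stop answering recursive queries for the whole internet; a sketch of the relevant options (the address ranges are placeholders for your own client networks):

        acl "trusted" {
            127.0.0.0/8;
            10.0.0.0/8;      // replace with your real client networks
        };

        options {
            recursion yes;
            allow-recursion   { trusted; };
            allow-query-cache { trusted; };
        };

    Newer BIND releases also offer response-rate limiting (the rate-limit option) for authoritative zones, though whether this server's version supports it would need checking.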


  • Subversion error: Repository moved permanently; please relocate

    - by Bart S.
    I've set up Subversion and Apache on my server. If I browse to it through my web browser it works fine (http://svn.host.com/reposname). However, if I do a checkout on my machine I get the following error:

        Command: Checkout from http://svn.host.com/reposname, revision HEAD, Fully recursive, Externals included
        Error: Repository moved permanently to 'http://svn.host.com/reposname/'; please relocate

    I checked Apache's error log, but it doesn't say anything. (It does now - see edit.) My repositories are stored under /var/www/svn/repos/ and my website is stored under /var/www/vhosts/x/... Here's the conf file for the subdomain:

        <Location />
            DAV svn
            SVNParentPath /var/www/svn/repos/
            AuthType Basic
            AuthName "Authorization Realm"
            AuthUserFile /var/www/svn/auth/svn.htpasswd
            Require valid-user
        </Location>

    Authentication works fine. Does anyone know what might be causing this?

    -- Edit

    So I restarted Apache (again) and tried it again, and now it gives me an error message, but it doesn't really help. Anyone have an idea what it means?

        [Wed Mar 31 23:41:55 2010] [error] [client my.ip.he.re] Could not fetch resource information.  [403, #0]
        [Wed Mar 31 23:41:55 2010] [error] [client my.ip.he.re] (2)No such file or directory: The URI does not contain the name of a repository.  [403, #190001]

    -- Edit 2

    If I do svn info it doesn't give anything useful:

        [root@eduro eduro.nl]# svn info http://svn.domain.com/repos/
        Username: username
        Password for 'username':
        svn: Repository moved permanently to 'http://svn.domain.com/repos/'; please relocate

    I also tried doing a local checkout (svn checkout file:///var/www/svn/repos/reposname) and that works fine (adding / committing works fine too). So it seems it has something to do with Apache. Some other information: I'm running CentOS 5.3, Plesk 9.3, Subversion version 1.6.9 (r901367).

    -- Edit 3

    I tried moving the repositories, but it didn't make any difference. SELinux is disabled, so that isn't it either.

    -- Edit 4

    Really? Nobody :(?
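
    One configuration pattern that is frequently suggested for this error, offered as a guess since Plesk generates its own vhost configs: "Repository moved permanently" typically appears when a request for the repository URL is answered by Apache's normal document tree instead of mod_dav_svn, which can happen when <Location /> competes with the vhost's DocumentRoot. Serving the repositories under a dedicated path avoids the overlap:

        <Location /svn>
            DAV svn
            SVNParentPath /var/www/svn/repos/
            AuthType Basic
            AuthName "Authorization Realm"
            AuthUserFile /var/www/svn/auth/svn.htpasswd
            Require valid-user
        </Location>

    Checkouts would then use http://svn.host.com/svn/reposname.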


  • Understanding connection tracking in iptables

    - by Matt
    I'm after some clarification of the state/connection tracking in iptables. What is the difference between these rules?

        iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

    Is connection tracking turned on when a packet is first matched containing -m state --state BLA, or is connection tracking always on? Can/should connection state be used for fast matching like below? E.g. suppose this is some sort of router/firewall (no NAT).

        # Default DROP policy
        iptables -P INPUT DROP
        iptables -P OUTPUT DROP
        iptables -P FORWARD DROP

        # Drop invalid
        iptables -A FORWARD -m state --state INVALID -j DROP

        # Accept established,related connections
        iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT

        # Allow ssh through, track connection
        iptables -A FORWARD -p tcp --syn --dport 22 -m state --state NEW -j ACCEPT


  • Is there a way to rsync in batches?

    - by Chris
    I have a huge chunk of data (11G) in a Subversion repository that I'm using rsync to migrate to Alfresco, whose Lucene indexer picks up new files as they hit the file system. I'm using a DAV mount as a proxy to allow me to rsync. The issue I'm having is that the post-rsync indexing is quite an expensive operation for such a huge chunk of data, so I was wondering whether there's a way I could logically separate the rsync into identically-sized batches (say 500MB each) so I could schedule them in cron. At the moment, I'm traversing the top-level folders and taking the smallest ones across first, but once I'm done with those, the much larger sub-directories are going to be quite troublesome. Please let me know if you need any further info. Thanks in advance.
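
    A rough sketch of one way to batch it, assuming GNU coreutils and that batching by file count is an acceptable stand-in for batching by size (equal-byte batches would need a smarter splitter); the paths are placeholders:

        #!/bin/sh
        # Build a list of everything to copy, split it into fixed-size chunks,
        # and feed each chunk to rsync via --files-from.
        SRC=/data/source
        DEST=/mnt/alfresco-dav
        cd "$SRC" || exit 1
        find . -type f > /tmp/all-files.txt
        split -l 1000 /tmp/all-files.txt /tmp/batch.
        for list in /tmp/batch.*; do
            rsync -av --files-from="$list" "$SRC" "$DEST"
        done

    Each iteration could instead be a separate cron run, keeping a cursor of which batch files have already been processed.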


  • DNS server BIND is not working [closed]

    - by user1742080
    I just installed BIND on RHEL 6 and pointed a domain to that server, but when I ping the domain it returns error 1214. Here is my named.conf:

        //
        // named.conf
        //
        // Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
        // server as a caching only nameserver (as a localhost DNS resolver only).
        //
        // See /usr/share/doc/bind*/sample/ for example named configuration files.
        //

        options {
            listen-on port 53 { any; };
            listen-on-v6 port 53 { ::1; };
            directory "/var/named";
            dump-file "/var/named/data/cache_dump.db";
            statistics-file "/var/named/data/named_stats.txt";
            memstatistics-file "/var/named/data/named_mem_stats.txt";
            allow-query { any; };
            recursion yes;

            dnssec-enable yes;
            dnssec-validation yes;
            dnssec-lookaside auto;

            /* Path to ISC DLV key */
            bindkeys-file "/etc/named.iscdlv.key";

            managed-keys-directory "/var/named/dynamic";
        };

        logging {
            channel default_debug {
                file "data/named.run";
                severity dynamic;
            };
        };

        zone "." IN {
            type hint;
            file "named.ca";
        };

        include "/etc/named.rfc1912.zones";
        include "/etc/named.root.key";

        zone "mydomain.com" {
            type master;
            file "/var/named/data/named.mydomain.com";
            allow-update { none; };
        };

    And the content of /var/named/data/named.mydomain.com:

        $TTL 38400

        mydomain.com.      IN SOA  ns1.mydomain.com. milad.yahoo.com. (
                                2012101201 ; serial number YYMMDDNN
                                28800      ; Refresh
                                7200       ; Retry
                                864000     ; Expire
                                38400      ; Min TTL
                                )

        mydomain.com.      IN A    1.2.3.4
        www                IN A    1.2.3.4
        ns1.mydomain.com.  IN A    1.2.3.4
        ns2.mydomain.com.  IN A    1.2.3.4
        mydomain.com.      IN NS   ns1.mydomain.com.
        mydomain.com.      IN NS   ns2.mydomain.com.

    And I'm sure the named service is running:

        [root@server ~]# service named status
        version: 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6_3.3
        CPUs found: 8
        worker threads: 8
        number of zones: 20
        debug level: 0
        xfers running: 0
        xfers deferred: 0
        soa queries in progress: 0
        query logging is OFF
        recursive clients: 0/0/1000
        tcp clients: 0/100
        server is up and running
        named (pid 26299) is running...
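
    A couple of quick checks that might help localize the problem, assuming 1.2.3.4 stands in for the server's real public IP as in the zone file:

        # Ask the server directly, bypassing whatever resolver ping uses:
        dig @1.2.3.4 mydomain.com A
        dig @1.2.3.4 mydomain.com NS

        # Validate the zone file syntax as named sees it:
        named-checkzone mydomain.com /var/named/data/named.mydomain.com

        # And confirm port 53 is not blocked by the firewall:
        iptables -L -n | grep 53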


  • Why doesn't NFS recognize a new UID?

    - by user76177
    I have two servers running RHEL6. I have root access to both. The main server, which I will refer to as "server", is a database server. The application server, which I will refer to as "client", mounts a directory from server via NFS. There is a user, appuser, on both client and server. However, appuser's UID on client is 502, while appuser's UID on server is 506. Both users need read and write capability on the NFS share. To facilitate this, I made the share owned by appuser on server. Running id appuser on each yields uid=506(appuser). Of course, client does not recognize that ownership, since appuser has a different id on client. So I did the following:

        1. Changed the UID of the user in /etc/passwd on client to be 506.
        2. Changed ownership of appuser's $HOME on client back to appuser so that I could log in.

    Now, when I go to look at the NFS share from the client side, I see that it is owned by 502, which is the OLD id for appuser on client. I can't change ownership of the NFS share from client, since that is a volume that physically resides on server. I need to make sure that the NFS share shows ownership of appuser from both server and client. What step have I missed since changing the appuser id on client? NOTE: I have not rebooted client (or anything else).
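
    Two possibilities worth checking, offered as guesses rather than a confirmed diagnosis; the paths are placeholders:

        # 1. Files created on the share before the UID change may genuinely be
        #    owned by numeric UID 502 on the server; that has to be fixed there:
        find /path/to/export -xdev -uid 502 -exec chown appuser {} +

        # 2. If this is NFSv4, the client may be showing a stale idmap result;
        #    remounting the share or restarting the idmap daemon clears it:
        umount /path/to/mount && mount /path/to/mount
        service rpcidmapd restart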


  • Enabling shell colours through PuTTY SSH

    - by Jon
    I have set a number of configurations in my .bashrc file to set the appearance of the shell on my Red Hat machine. However, when I log in as root using PuTTY, the colours are not shown. I can enable them again by typing 'su', which simply puts me back to root like I was when I logged into PuTTY, but that isn't exactly ideal. Is there some configuration file or something I can use to enable shell colours when I login with PuTTY? Thanks
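
    A plausible explanation, offered as a guess rather than something confirmed against this machine: PuTTY gives you a login shell, which reads ~/.bash_profile, while a plain su starts a non-login shell, which reads ~/.bashrc. The usual fix is to have the former source the latter:

        # in root's ~/.bash_profile
        if [ -f ~/.bashrc ]; then
            . ~/.bashrc
        fi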


  • Sporadic unspecific kernel panic

    - by koma
    I'm experiencing rare (so far about once a month) hard crashes on our Ubuntu Server 10.04 LTS box. The box itself is quite old (Dell PowerEdge 750 from 2004, Pentium 4 2.8 GHz). I set up netconsole after it crashed twice last Thursday and was able to extract the following output:

        [ 9354.062473] invalid opcode: 0000 [#1] SMP
        [ 9354.062516] last sysfs file: /sys/devices/pci0000:00/0000:00:1d.0/usb2/2-2/2-2:1.0/uevent
        [ 9354.062555] Modules linked in: ppdev adm1026 hwmon_vid i2c_i801 bridge stp dcdbas psmouse serio_raw netconsole configfs shpchp lp parport usbhid hid e1000
        [ 9354.062685]
        [ 9354.062704] Pid: 3988, comm: rsync Not tainted 2.6.38-12-generic-pae #51~lucid1-Ubuntu Dell Computer Corporation PowerEdge 750 /0R1479
        [ 9354.062773] EIP: 0060:[<c104fef1>] EFLAGS: 00010046 CPU: 1
        [ 9354.062802] EIP is at check_preempt_wakeup+0x181/0x250
        [ 9354.062826] EAX: 00000002 EBX: f2a10ccc ECX: 00000000 EDX: 00000002
        [ 9354.062850] ESI: f1db71cc EDI: f1db71a0 EBP: f1dbdea8 ESP: f1dbde8c
        [ 9354.062875] DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068
        [ 9354.062900] Process rsync (pid: 3988, ti=f1dbc000 task=f1db71a0 task.ti=f1dbc000)
        [ 9354.062933] Stack:
        [ 9354.062951] 0053ea60 f7907680 f28da840 f2a10ca0 c153ea60 f7907680 c153ea60 f1dbdebc
        [ 9354.063019] c103f98a f2a10ca0 f7907680 00000001 f1dbdef8 c104f97f 00000000 f2f0bacc
        [ 9354.063088] f7904338 00000001 00000003 00000000 f2f0bacc 00000001 00000001 00000086
        [ 9354.063157] Call Trace:
        [ 9354.063183] [<c103f98a>] check_preempt_curr+0x6a/0x80
        [ 9354.063210] [<c104f97f>] try_to_wake_up+0x5f/0x3f0
        [ 9354.063236] [<c1077a00>] ? hrtimer_wakeup+0x0/0x30
        [ 9354.063261] [<c104fd64>] wake_up_process+0x14/0x20
        [ 9354.063286] [<c1077a1d>] hrtimer_wakeup+0x1d/0x30
        [ 9354.063310] [<c1077f4a>] __run_hrtimer+0x7a/0x1c0
        [ 9354.063336] [<c107dbad>] ? ktime_get+0x6d/0x110
        [ 9354.063360] [<c1078310>] hrtimer_interrupt+0x120/0x2b0
        [ 9354.063390] [<c1535c36>] smp_apic_timer_interrupt+0x56/0x8a
        [ 9354.063418] [<c152f459>] apic_timer_interrupt+0x31/0x38
        [ 9354.063446] [<c1520000>] ? mca_attach_bus+0x5/0xc0
        [ 9354.063469] Code: 8b 9b 20 01 00 00 8b 86 24 01 00 00 3b 83 24 01 00 00 75 e6 85 db 0f 84 a3 00 00 00 89 da 89 f0 e8 75 f6 fe ff 83 f8 01 0f 85 00 <fe> ff ff 89 f8 e8 95 f9 fe ff 8b 5e 1c 85 db 0f 84 e4 fe ff ff
        [ 9354.063804] EIP: [<c104fef1>] check_preempt_wakeup+0x181/0x250 SS:ESP 0068:f1dbde8c
        [ 9354.064231] ---[ end trace 290689cea65aea7f ]---
        [ 9354.064290] Kernel panic - not syncing: Fatal exception in interrupt
        [ 9354.064352] Pid: 3988, comm: rsync Tainted: G      D     2.6.38-12-generic-pae #51~lucid1-Ubuntu
        [ 9354.064424] Call Trace:
        [ 9354.064481] [<c152c057>] ? panic+0x5c/0x15b
        [ 9354.064539] [<c15302bd>] ? oops_end+0xcd/0xd0
        [ 9354.064539] [<c100d9e4>] ? die+0x54/0x80
        [ 9354.064539] [<c152f926>] ? do_trap+0x96/0xc0
        [ 9354.064539] [<c100ba00>] ? do_invalid_op+0x0/0xa0
        [ 9354.064539] [<c100ba8b>] ? do_invalid_op+0x8b/0xa0
        [ 9354.064539] [<c104fef1>] ? check_preempt_wakeup+0x181/0x250
        [ 9354.064539] [<c144884d>] ? __kfree_skb+0x3d/0x90
        [ 9354.064539] [<c1042ae7>] ? update_curr+0x247/0x2a0
        [ 9354.064539] [<c10447bb>] ? update_cfs_load+0x11b/0x2d0
        [ 9354.064539] [<c1042a25>] ? update_curr+0x185/0x2a0
        [ 9354.064539] [<c152f6bf>] ? error_code+0x67/0x6c
        [ 9354.064539] [<c104fef1>] ? check_preempt_wakeup+0x181/0x250
        [ 9354.064539] [<c103f98a>] ? check_preempt_curr+0x6a/0x80
        [ 9354.064539] [<c104f97f>] ? try_to_wake_up+0x5f/0x3f0
        [ 9354.064539] [<c1077a00>] ? hrtimer_wakeup+0x0/0x30
        [ 9354.064539] [<c104fd64>] ? wake_up_process+0x14/0x20
        [ 9354.064539] [<c1077a1d>] ? hrtimer_wakeup+0x1d/0x30
        [ 9354.064539] [<c1077f4a>] ? __run_hrtimer+0x7a/0x1c0
        [ 9354.064539] [<c107dbad>] ? ktime_get+0x6d/0x110
        [ 9354.064539] [<c1078310>] ? hrtimer_interrupt+0x120/0x2b0
        [ 9354.064539] [<c1535c36>] ? smp_apic_timer_interrupt+0x56/0x8a
        [ 9354.064539] [<c152f459>] ? apic_timer_interrupt+0x31/0x38
        [ 9354.064539] [<c1520000>] ? mca_attach_bus+0x5/0xc0

    Googling for this issue didn't really turn up anything useful (most of what I found was related to btrfs, but I don't use that, although the module exists and is sometimes loaded). From experience it might have to do with relatively heavy I/O, as two of the panics happened during a backup procedure. The kernel is 2.6.38-12-generic-pae, but I'm pretty sure I also saw panics on 2.6.32. I meanwhile upgraded to 3.0.0-17-generic-pae and am waiting for the next crash ;-) I'm at a loss here, so any pointers where to look for the cause or what it could be would be great :-) Thanks!


  • Can only bring up one of two interfaces

    - by mstaessen
    I'm having a bizarre issue with my HP ProLiant DL360 G4p server. It has two gigabit ethernet interfaces but I can bring up only one of them. This is starting to freak me out and that's why I turned here. I'm running the x64 Ubuntu 11.10 server edition. lshw -c network shows that the second interface is disabled, and I have no idea why or how to enable it.

        $ sudo lshw -c network
          *-network:0
               description: Ethernet interface
               product: NetXtreme BCM5704 Gigabit Ethernet
               vendor: Broadcom Corporation
               physical id: 2
               bus info: pci@0000:02:02.0
               logical name: eth0
               version: 10
               serial: 00:18:71:e3:6d:26
               size: 100Mbit/s
               capacity: 1Gbit/s
               width: 64 bits
               clock: 66MHz
               capabilities: pcix pm vpd msi bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
               configuration: autonegotiation=on broadcast=yes driver=tg3 driverversion=3.119 duplex=full firmware=5704-v3.27b, ASFIPMIc v2.36 ip=10.48.8.x latency=64 link=yes mingnt=64 multicast=yes port=twisted pair speed=100Mbit/s
               resources: irq:25 memory:fdf70000-fdf7ffff
          *-network:1 DISABLED
               description: Ethernet interface
               product: NetXtreme BCM5704 Gigabit Ethernet
               vendor: Broadcom Corporation
               physical id: 2.1
               bus info: pci@0000:02:02.1
               logical name: eth1
               version: 10
               serial: 00:18:71:e3:6d:25
               capacity: 1Gbit/s
               width: 64 bits
               clock: 66MHz
               capabilities: pcix pm vpd msi bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
               configuration: autonegotiation=on broadcast=yes driver=tg3 driverversion=3.119 firmware=5704-v3.27b latency=64 link=no mingnt=64 multicast=yes port=twisted pair
               resources: irq:26 memory:fdf60000-fdf6ffff

    If I try to ifup eth1, I get:

        $ sudo ifup eth1
        Ignoring unknown interface eth1=eth1.

    I figured that's what happens when there is no eth1 listed in /etc/network/interfaces. But when I add the configuration for eth1, I still can't ifup:

        $ sudo ifup eth1
        RTNETLINK answers: File exists
        Failed to bring up eth1.

    I've also tried ifconfig eth1 up, but without any result. For clarity, I have added a masked version of /etc/network/interfaces. I don't think it is the cause of the problem though.

        $ cat /etc/network/interfaces
        # This file describes the network interfaces available on your system
        # and how to activate them. For more information, see interfaces(5).

        # The loopback network interface
        auto lo
        iface lo inet loopback

        # The primary network interface
        auto eth0
        iface eth0 inet static
            address 10.48.8.x
            netmask 255.255.255.y
            network 10.48.8.z
            broadcast 10.48.8.t
            gateway 10.48.8.u

        auto eth1
        iface eth1 inet static
            address 193.190.253.x
            netmask 255.255.255.y
            network 193.190.253.z
            broadcast 193.190.253.t
            gateway 193.190.253.u

    I really need some help fixing this. It's driving me crazy. Thanks.
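
    One detail that stands out, offered as a guess rather than a confirmed diagnosis: both stanzas define a gateway, so ifupdown tries to install a second default route, which fails with exactly "RTNETLINK answers: File exists". A sketch of one workaround is to keep a single gateway line and add any extra route explicitly:

        auto eth1
        iface eth1 inet static
            address 193.190.253.x
            netmask 255.255.255.y
            # no "gateway" line here; eth0 already provides the default route.
            # An explicit route for specific traffic could be added instead:
            # post-up ip route add 193.190.0.0/16 via 193.190.253.u dev eth1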


  • Having 2 IPs on a Debian 7 box

    - by David
    I just installed Debian Wheezy on my home server. I want to assign 2 IPs to it on the same network interface: 1 static IP (eth0) and 1 dynamic IP (eth0:1). I know it doesn't make much sense, but I need it to test something. I edited my /etc/network/interfaces to be like this:

        auto lo eth0 eth0:1

        iface lo inet loopback

        iface eth0 inet static
            address 192.168.178.240
            network 192.168.178.0
            netmask 255.255.255.0
            broadcast 192.168.178.255
            gateway 192.168.178.1

        iface eth0:1 inet dhcp

    When I bring up eth0:1 (ifup eth0:1) I get the following error (eth0 works fine):

        Bind socket to interface: No such device
        Failed to bring up eth0:1.

    Is it even possible to have a dynamic and a static IP on the same network adapter?
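
    For what it's worth, DHCP on an alias like eth0:1 is often reported not to work with ifupdown, since the DHCP client needs to bind to a real device and the alias only exists once an address is assigned. A sketch of the usual static-alias alternative (the address is a placeholder in the same subnet):

        auto eth0:1
        iface eth0:1 inet static
            address 192.168.178.241
            netmask 255.255.255.0

    If the second address really must come from DHCP, running a second dhclient against eth0 with a separate lease file is sometimes suggested, but that is fiddly and untested here.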


  • sed syntax to remove XML

    - by mjb
    I'm trying to sanitize this output from its metadata to plug it into GreekTools, but I am getting stuck on sed:

        curl --silent www.brainyquote.com | egrep '(span class="body")|(span class="bodybold")' | sed -n '6p; 7p; ' | sed 's/\<*\>//g'

    An example of the output:

        <span class="body">Literature is news that stays news.</span><br>
        <span class="bodybold">Ezra Pound</span>

    Could someone help me along on this track?
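
    One likely snag, assuming GNU sed: \< and \> in that expression are word-boundary anchors, not literal angle brackets, so it never matches the tags. The usual tag-stripping idiom is s/<[^>]*>//g, which deletes each <...> run; a sketch, assuming no stray < or > appears inside the quote text:

        curl --silent www.brainyquote.com \
          | egrep '(span class="body")|(span class="bodybold")' \
          | sed -n '6p; 7p' \
          | sed 's/<[^>]*>//g'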


  • How Do I Use Multiple Versions of OpenSSL ... One for Apache and one for PHP

    - by Ken S.
    I have an Apache 2.2 (self-compiled version) server that is getting dinged during a PCI scan because it does not support TLS 1.1 or 1.2 ciphers. After some digging I found that the installed version of OpenSSL (0.9.8e) does not contain the newest TLS ciphers. So I went and downloaded and compiled the latest version of OpenSSL (1.0.1c) and have it installed in an alternate location within /opt so it wouldn't interfere with the installed version. What I would like to do is to compile Apache against the 1.0.1 libraries and keep the system-installed libraries for use with PHP, cURL, openssh, etc. I'm hoping that doing it this way will allow Apache to use the newest TLS but not break anything with any other programs that require the old libraries. I thought I could do this by adding an entry in to /etc/ld.so.conf that pointed to the new libraries, but I think this will conflict with the existing ones. i.e. two references to libcrypto could cause everything to have issues. The main reason for doing this is because of issues with PHP cURLing to external servers and having issues with the latest OpenSSL libs thus requiring edits to our PHP code. Would love some guidance on how best to accomplish this.
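
    A sketch of how this is commonly done, assuming the alternate OpenSSL lives in /opt/openssl-1.0.1c and the Apache prefix is a placeholder. The rpath link flag pins Apache's binaries to the new libraries without touching /etc/ld.so.conf, so the system copies stay in place for PHP, cURL and openssh:

        cd httpd-2.2.x/
        LDFLAGS="-Wl,-rpath,/opt/openssl-1.0.1c/lib" \
        ./configure --prefix=/usr/local/apache2 \
                    --enable-ssl \
                    --with-ssl=/opt/openssl-1.0.1c
        make && make install

        # afterwards, verify which libssl the httpd binary actually resolved:
        ldd /usr/local/apache2/bin/httpd | grep -E 'libssl|libcrypto'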


  • Kickstart Partitioning Configuration

    - by Flo
    I've been trying to run a kickstart script with the following partition configuration:

        # Clear the master boot record
        zerombr
        bootloader --location=mbr --driveorder=sda --append=" rhgb crashkernel=auto quiet"

        # Set up the partitions/logical volumes/logical groups
        clearpart --all
        part /boot --fstype=ext4 --asprimary --size=512 --ondisk=sda
        part swap --size=2048 --fstype=swap --ondisk=sda
        part pv.01 --fstype=ext4 --grow --size=200 --ondisk=sda
        part pv.02 --fstype=ext4 --grow --size=200 --ondisk=sdb
        volgroup VolGroup pv.01 pv.02 --pesize=32768
        logvol /opt --fstype=ext4 --name=opt.fs --vgname=VolGroup --size=40000
        logvol / --fstype=ext4 --name=root.fs --vgname=VolGroup --size=78000

    I have two hard drives and it looks to me like it's a really simple configuration. When I run the kickstart I keep getting all these errors that have to do with Python files for configuring partitions. The only actual maybe-useful piece of information is KeyError: /dev/sda/. I tried a number of alterations of this configuration but nothing really worked. Any ideas?
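
    One variant that is often suggested for KeyError-style partitioning failures (an assumption, not a verified fix for this machine) is to initialize the disk labels explicitly and name the drives on clearpart, so the installer does not stumble over pre-existing or missing partition tables:

        zerombr
        clearpart --all --initlabel --drives=sda,sdb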


  • What is the best vfat driver for FUSE?

    - by Vi
    The FUSE filesystem list shows some FuseFat and FatFuse. One is a 404 Not Found; the others are old, not buildable and probably depend on glib. Now I'm using mountlo for the task (mounting USB drives in a generic way without root access or suid things, except for fusermount itself), but it looks too big for such a task. Is there a good vfat FUSE driver?


  • How can I change the flow through this PAM (pluggable authentication modules) file?

    - by Jamie
    I'd like the PAM stack to skip the pam_mount.so line when a Unix login succeeds. I've tried various things including:

        auth [success=2 default=ignore] pam_unix.so nullok_secure
        auth [success=2 default=ignore] pam_winbind.so krb5_auth krb5_ccache_type=FILE cached_login try_first_pass
        auth requisite                  pam_deny.so
        auth requisite                  pam_permit.so
        auth required                   pam_permit.so
        auth optional                   pam_mount.so

    But I can't get it to work. Conversely, when a session shuts down, how can I modify the following so that an unmount command (via pam_mount.so) is avoided during a Unix login?

        session [default=1] pam_permit.so
        session requisite   pam_deny.so
        session required    pam_permit.so
        session required    pam_unix.so
        session optional    pam_winbind.so
        session optional    pam_mount.so
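
    A minimal illustration of the jump mechanism, not a drop-in replacement for the stacks above: success=N is equivalent to ok plus jumping over the next N modules, so placing pam_mount immediately after pam_unix and jumping over it gives:

        auth [success=1 default=ignore] pam_unix.so nullok_secure
        auth optional                   pam_mount.so
        auth required                   pam_permit.so

    Here a successful pam_unix result skips the single pam_mount line, while any other path falls through and still hits pam_mount.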


  • MTD mtd3ro backup returns BCH decoding failed

    - by saeed144
    While doing a kernel backup of an MTD (Memory Technology Device) partition from /dev/mtd/mtd3ro on a TI board, I get many "BCH decoding failed" errors. Here is the system info:

        # cat /proc/mtd
        dev:   size      erasesize  name
        mtd0: 00080000 00020000 "X-Loader"
        mtd1: 00140000 00020000 "U-Boot"
        mtd2: 000c0000 00020000 "U-Boot Env"
        mtd3: 00500000 00020000 "Kernel"
        mtd4: 1f880000 00020000 "File System"

    Here is the method used:

        dd if=/dev/mtd/mtd3ro of=/data/local/tmp/mtd3.bin

    Doing a cat also returns the same error, and here is the error:

        BCH decoding failed
        BCH decoding failed

    Yes, the destination has enough space ;) Tell me what you think? Thanks
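
    Since dd knows nothing about NAND bad blocks or ECC, one commonly suggested alternative (assuming the mtd-utils package is available on the board) is nanddump, which reads through the MTD layer and can handle ECC and bad blocks:

        nanddump -f /data/local/tmp/mtd3.bin /dev/mtd/mtd3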


  • Primary/secondary ethernet interfaces in Ubuntu 9.10

    - by Josh
    I have an Ubuntu 9.10 machine with three ethernet interfaces: eth0, eth1 and eth2. eth2 is connected to a private network. eth0 and eth1 are connected to two different LANs; either one will provide access to the internet. All three networks have DHCP servers. Using Ubuntu's default settings (and GNOME), when I boot up all the interfaces are active and my system gets three IP addresses. However, any attempt to access the internet results in connection timeouts and other weirdness. I suspect that traffic is going out on one NIC (like eth0) and coming back in on another (like eth1), but I'm not sure what's going on. The only way I can access the internet at the moment is to bring two of the devices down with ifdown. How can I configure eth0 as my primary interface so all traffic goes out by default on that interface, while keeping the other two active? Also, I want to make sure Avahi broadcasts properly on all three IPs so that the computers on the LAN of eth1 can still connect to myHostname.local...
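
    One direction worth exploring, as a manual experiment rather than a persistent fix: with three DHCP leases, there are likely several default routes competing, and removing all but eth0's would make outbound traffic deterministic while leaving the interfaces up for Avahi:

        ip route show                       # expect multiple "default via ..." lines
        sudo ip route del default dev eth1
        sudo ip route del default dev eth2
        # leaving only: default via <eth0-gateway> dev eth0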


  • Explanation of nodev and nosuid in fstab

    - by Ivan Kovacevic
    I see those two options constantly suggested on the web when someone describes how to mount a tmpfs or ramfs, often together with noexec, but I'm specifically interested in nodev and nosuid. I basically hate just blindly repeating what somebody suggested without real understanding, and since I only see copy/paste instructions on the net regarding this, I ask here. This is from the documentation:

        nodev  - Don't interpret block special devices on the filesystem.
        nosuid - Block the operation of suid, and sgid bits.

    But I would like a practical explanation of what could happen if I leave those two out. Let's say that I have configured a tmpfs or ramfs (without these two mentioned options set) that is accessible (read+write) by a specific (non-root) user on the system. What can that user do to harm the system? Excluding the case of consuming all available system memory in the case of ramfs.
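
    A sketch of the attack surface each option closes; the file names are made up, and both cases assume the attacker somehow gets the dangerous file onto the mount in the first place (plain non-root users cannot mknod or set suid-root themselves, so these are defense-in-depth measures):

        # Without nodev: a device node on the mount gives raw access to, say,
        # the first disk, bypassing filesystem permissions entirely:
        mknod /mnt/tmp/sda b 8 0     # creating it needs root, but opening it doesn't

        # Without nosuid: a suid-root binary placed on the mount runs with root
        # privileges when executed; with nosuid, the suid bit is simply ignored:
        /mnt/tmp/evil-suid-binary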


  • Co-worker told me Ubuntu Server LTS is not built for production env

    - by Sandro Dzneladze
    I was confronted with a very vague statement today, but unfortunately I had no clue how to respond to or analyse it: a co-worker told me Ubuntu Server LTS is not built for a production environment. What is there that makes it not built for production? (Obviously no answer followed, so this is my question to you...) Is it:

        ...packages?
        ...ease of use?
        ...supported systems?
        ...stability?
        ...support?
        ...popularity?

