Search Results

Search found 26179 results on 1048 pages for 'linux from scratch'.

  • OpenLDAP with ppolicy

    - by nitins
    We have a working installation of OpenLDAP 2.4 that uses shadowAccount attributes. I want to enable the ppolicy overlay. I have gone through the steps in the OpenLDAP and ppolicy howto: I made the changes to slapd.conf and imported the password policy. After a restart OpenLDAP works fine and I can see the password policy when I do an ldapsearch. A user object looks like this:
      # extended LDIF
      #
      # LDAPv3
      # base <dc=xxxxx,dc=in> with scope subtree
      # filter: uid=testuser
      # requesting: ALL
      #
      # testuser, People, xxxxxx.in
      dn: uid=testuser,ou=People,dc=xxxxx,dc=in
      uid: testuser
      cn: testuser
      objectClass: account
      objectClass: posixAccount
      objectClass: top
      objectClass: shadowAccount
      shadowMax: 90
      shadowWarning: 7
      loginShell: /bin/bash
      uidNumber: 569
      gidNumber: 1005
      homeDirectory: /data/testuser
      userPassword:: xxxxxxxxxxxxx
      shadowLastChange: 15079
    The password policy is:
      # default, policies, xxxxxx.in
      dn: cn=default,ou=policies,dc=xxxxxx,dc=in
      objectClass: top
      objectClass: device
      objectClass: pwdPolicy
      cn: default
      pwdAttribute: userPassword
      pwdMaxAge: 7776002
      pwdExpireWarning: 432000
      pwdInHistory: 0
      pwdCheckQuality: 1
      pwdMinLength: 8
      pwdMaxFailure: 5
      pwdLockout: TRUE
      pwdLockoutDuration: 900
      pwdGraceAuthNLimit: 0
      pwdFailureCountInterval: 0
      pwdMustChange: TRUE
      pwdAllowUserChange: TRUE
      pwdSafeModify: FALSE
    I do not know what should be done after this. How can the shadowAccount attributes be replaced with the password policy?
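
    One possible direction, sketched under the assumption that the ppolicy schema and module are already loaded and that the DNs match the anonymised ones above; nothing here beyond those names comes from the original post. The overlay is pointed at the default policy in slapd.conf, and the now-redundant shadow attributes are stripped from a user with ldapmodify:
      # slapd.conf (database section) - illustrative
      overlay ppolicy
      ppolicy_default "cn=default,ou=policies,dc=xxxxx,dc=in"

      # strip-shadow.ldif - remove the attributes the policy now covers
      dn: uid=testuser,ou=People,dc=xxxxx,dc=in
      changetype: modify
      delete: shadowMax
      -
      delete: shadowWarning

      ldapmodify -x -D "cn=admin,dc=xxxxx,dc=in" -W -f strip-shadow.ldif
    With ppolicy in place, expiry and lockout come from pwdMaxAge and friends in the policy entry rather than from the shadow fields, so the shadowAccount objectClass can eventually be dropped once nothing else reads it.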

    Read the article

  • Upgraded from fc10 to fc12, now I have eth0_rename; how do I get back to plain old eth0?

    - by shank
    I upgraded from Fedora 10 to Fedora 12. Unfortunately, my ethernet interface eth0 is now named eth0_rename. I'd like to get back to having it named plain old eth0. I googled a bit, but the suggested fix of removing the eth0 entry from /etc/udev/rules.d/70-persistent-net.rules seems to have no effect (I restarted the network service but didn't reboot). The interface itself works just fine, although I could see a script or two having a problem with the new name, so it's more of an inconvenience than anything else. Any ideas? Thanks.
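
    A hedged sketch of the usual fix, assuming the MAC address below is replaced with the real one (it is not from the original post): the rename normally comes from a stale entry in the udev persistent-net rules, and those rules are only applied when the interface is (re)created, so a reboot or driver reload is needed after editing them.
      # /etc/udev/rules.d/70-persistent-net.rules - keep exactly one line for this NIC
      SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:11:22:33:44:55", NAME="eth0"

      # make sure the ifcfg file agrees, then reboot (or reload the NIC driver)
      grep -E 'DEVICE|HWADDR' /etc/sysconfig/network-scripts/ifcfg-eth0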

    Read the article

  • How can I bind mount a directory with a space in it?

    - by chris
    I have a drive mounted at /media/ that contains a directory with a space in the name - let's call it "My Stuff". I would like to bind mount it to "My Stuff" in my home directory. I tried the following in fstab, but all attempts to mount resulted in a syntax error:
      /media/My\ Stuff /home/me/My\ Stuff none bind
      "/media/My\ Stuff" "/home/me/My\ Stuff" none bind
      "/media/My Stuff" "/home/me/My Stuff" none bind
    Is there a way to do this?
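
    A minimal sketch of the standard workaround, assuming the paths above: fstab fields are whitespace-separated and do not honour quotes or backslash-space, but they do accept the octal escape \040 for a literal space.
      # /etc/fstab
      /media/My\040Stuff  /home/me/My\040Stuff  none  bind  0  0

      # test without rebooting
      mkdir -p "/home/me/My Stuff" && mount "/home/me/My Stuff"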

    Read the article

  • VMXNET3 nic loses the ability to update ARP table after N hours

    - by Peter
    I have a Fedora 18 VM that stops updating the ARP table on eth1 after running for somewhere between several hours and a few days. Other VMs on the same hypervisor can access all of the same networks without issue. A tcpdump of the offending NIC shows only ARP broadcasts but no responses, and none of the other VMs on the vDS see the ARP broadcasts from the offending NIC. The only way I can currently solve the problem is to reboot the VM, and then everything works for a while. I've tried changing the port on the vDS and even swapping the network configurations after eth1 loses its ARP table; the ARP problem follows eth1, yet I can still reach the machines that were originally on eth1. If I statically add the ARP entries for machines on the same subnet, I have no connectivity problems. The hypervisor is an HP BL49X-series blade with Flex-10 network modules. Has anyone seen anything like this before?
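
    A diagnostic sketch rather than an answer; the assumption that the guest's neighbour table or the vSwitch port is at fault is mine, not established in the post. Capturing this state before the next reboot usually narrows it down:
      ip -s neigh show dev eth1                    # FAILED/INCOMPLETE entries and counters
      sysctl net.ipv4.neigh.default.gc_thresh1 net.ipv4.neigh.default.gc_thresh2 net.ipv4.neigh.default.gc_thresh3
      dmesg | grep -i -e vmxnet -e neighbour       # driver resets or "neighbour table overflow"
      tcpdump -ni eth1 arp                         # are replies reaching the guest at all?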

    Read the article

  • VIM zsh, bash and colors in command line on Ubuntu

    - by Jacek Wysocki
    I have a problem with the Vim command line when calling system commands, e.g. :!ls - the colors in the command output are not interpreted by Vim. My system is Ubuntu 12.04 LTS with Vim 7.3.429 from the Ubuntu repositories. Is there any workaround for this problem?
    EDIT: My vimrc file. :!echo $TERM in Vim returns: dumb
    EDIT2: I found a simple workaround, but it's not perfect:
      if [ "$VIM" ] && [ "$TERM" = "dumb" ]
      then
          # For gvim's monochromatic :shell
          PS1='\n\u@\h \w\n\$ '
          unalias ls
          unalias grep
      fi
    (This works in bash.)
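
    A hedged zsh counterpart of the workaround above (an untested sketch; the prompt string is illustrative): the same idea of detecting Vim's dumb terminal and dropping colourised aliases, placed in ~/.zshrc.
      if [[ -n "$VIM" && "$TERM" == "dumb" ]]; then
          PS1='%n@%m %~ %# '
          unalias ls 2>/dev/null
          unalias grep 2>/dev/null
      fi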

    Read the article

  • GNU screen - Unable to reattach to screen after lost connection

    - by subhashish
    I was using irssi in screen but lost my connection. After I ssh'd back into the server, I can no longer attach to that screen. screen -ls shows that the screen is still attached. I tried screen -D to force-detach it, and it said it detached, but screen -ls still says it's attached. I tried screen -x and it just hangs there.
      [sub@server ~]$ screen -ls
      There are screens on:
              4033.poe        (Detached)
              7728.irssi      (Attached)
      2 Sockets in /var/run/screen/S-sub.
    What can I do now?
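
    A short sketch of the usual way out, assuming the session list above: target the stuck session by name and detach-and-reattach in one step. If even that hangs, the next step is killing the stale attached client process (not the screen session itself).
      screen -D -r 7728.irssi        # power-detach the dead attachment and reattach here
      # or, if it still claims to be attached:
      screen -d -r irssi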

    Read the article

  • smartctl or hddtemp for xvda [on hold]

    - by HST
    I'm trying to check the state of the drives on a remote server running Debian wheezy. I'm using software RAID10 on top of, I guess, Xen, since the entries in /dev are /dev/xvda and /dev/xvdb. But if I try smartctl -a /dev/xvda I get:
      /dev/xvda: Unable to detect device type
      Smartctl: please specify device type with the -d option.
    I've tried various device type guesses; none work. There is a similar problem with hddtemp, which reports:
      ERROR: /dev/xvda: can't determine bus type (or this bus type is unknown)
    I've searched the smartmontools documentation but can't find any discussion of virtual disks. How do I get behind the virtualisation to something smartctl or hddtemp can work with?
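
    A hedged note rather than a fix: SMART data and drive temperature live in the physical disk's firmware, and /dev/xvda is a paravirtual block device that doesn't pass those commands through, so the queries have to run on the Xen host (dom0) against the real devices - something like the sketch below, assuming host access and plain SATA disks (device names are illustrative):
      # on the hypervisor, not in the guest
      smartctl -a -d sat /dev/sda
      hddtemp /dev/sda /dev/sdb
    From inside the guest, the closest available signal is the RAID's own health, e.g. cat /proc/mdstat.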

    Read the article

  • Dovecot, Postfix, Postfixadmin - can't send/receive mail

    - by Jack
    I am setting up a mail server: Dovecot and Postfix with MySQL support and Postfixadmin. I have spent literally all day trying to figure it out, but I'm still unable to either send or receive any email. To my knowledge I have configured everything correctly, so either there is another problem or my knowledge isn't good enough. Here is what I get when I use "echo test | mail [email protected]":
      Jul 11 00:41:07 server postfix/pickup[17999]: 5B0D32AE1B: uid=0 from=
      Jul 11 00:41:07 server postfix/cleanup[19444]: 5B0D32AE1B: message-id=<[email protected]
      Jul 11 00:41:07 server postfix/qmgr[18513]: 5B0D32AE1B: from=, size=329, nrcpt=1 (queue active)
      Jul 11 00:41:12 server postfix/smtp[19448]: 5B0D32AE1B: to=, relay=none, delay=5.3, delays=0.1/0.01/5.2/0, dsn=4.4.3, status=deferred (Host or domain name not found. Name service error for name=dsa.com type=MX: Host not found, try again)
    (Addresses like mail.asd.com and the recipient are changed for privacy. The odd part is where the log prints name=dsa.com - I haven't found that name anywhere in the files I edited during the installation, and my domain isn't a .com in the first place.)
    Here is what I get when I try to send a message from the Postfix Admin interface:
      Jul 11 00:49:08 server postfix/smtpd[19479]: connect from localhost[127.0.0.1]
      Jul 11 00:49:08 server postfix/trivial-rewrite[19484]: warning: do not list domain asd.com in BOTH mydestination and virtual_mailbox_domains
      Jul 11 00:49:08 server postfix/smtpd[19479]: 4F7892AE1E: client=localhost[127.0.0.1]
      Jul 11 00:49:08 server postfix/cleanup[19487]: 4F7892AE1E: message-id=<[email protected]
      Jul 11 00:49:08 server postfix/qmgr[18513]: 4F7892AE1E: from=, size=317, nrcpt=1 (queue active)
      Jul 11 00:49:08 server postfix/smtpd[19479]: disconnect from localhost[127.0.0.1]
      Jul 11 00:49:10 server postfix/smtpd[19492]: connect from localhost[127.0.0.1]
      Jul 11 00:49:10 server postfix/trivial-rewrite[19484]: warning: do not list domain asd.com in BOTH mydestination and virtual_mailbox_domains
      Jul 11 00:49:10 server postfix/smtpd[19492]: 743AE2AE1F: client=localhost[127.0.0.1]
      Jul 11 00:49:10 server postfix/cleanup[19487]: 743AE2AE1F: message-id=<[email protected]
      Jul 11 00:49:10 server postfix/qmgr[18513]: 743AE2AE1F: from=, size=772, nrcpt=1 (queue active)
      Jul 11 00:49:10 server postfix/smtpd[19492]: disconnect from localhost[127.0.0.1]
      Jul 11 00:49:10 server amavis[13437]: (13437-11) Passed CLEAN, LOCAL [127.0.0.1] - , Message-ID: <[email protected], mail_id: 86+KQY93ANel, Hits: -0.002, size: 317, queued_as: 743AE2AE1F, 2145 ms
      Jul 11 00:49:10 server postfix/smtp[19489]: 4F7892AE1E: to=, relay=127.0.0.1[127.0.0.1]:10024, delay=2.3, delays=0.17/0.01/0/2.1, dsn=2.0.0, status=sent (250 2.0.0 from MTA([127.0.0.1]:10025): 250 2.0.0 Ok: queued as 743AE2AE1F)
      Jul 11 00:49:10 server postfix/qmgr[18513]: 4F7892AE1E: removed
    I really don't know what the problem might be. If you need to know anything else, feel free to ask and I'll clarify.
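
    A hedged starting point based only on what the logs above show (the domain names are the anonymised ones from the post, and example.org is a placeholder): the trivial-rewrite warning means the same domain is listed both as a local and as a virtual domain, and the deferred outbound message points at DNS/MX resolution from the box, so checking both is cheap before digging further.
      postconf mydestination virtual_mailbox_domains   # the domain should appear in only one of these
      postconf -e 'mydestination = localhost'          # illustrative: keep the virtual domain out of mydestination
      postfix reload
      dig +short mx example.org                        # can this box resolve MX records at all?
      postqueue -p                                     # what is still stuck, and why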

    Read the article

  • How many users are "many users"?

    - by kemp
    I need to find a solution for a website which is struggling under load. The site gets ~500 simultaneous connections during peak time and counts around 42k hits per day. It's a WordPress-based site bridged with a vBulletin forum, with a lot of content and a fairly complex structure that makes intensive use of the database. I have already implemented code-level full-page caching (without this the server just crashes), and configured all the other caching directives, as well as combining CSS files and the like to limit HTTP requests as much as possible. I need to understand whether there is more that can be done in software, or whether the load is just too much for the server to handle and it needs to be upgraded, because the server goes down occasionally during peak times. I can't access the server right now, but it's a dedicated CentOS machine (I think 4 GB of RAM; I can't say what CPU) running Apache and MySQL. So back to the main question: how can I know when the users are just too many?
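
    Not an answer so much as a way to turn "too many" into numbers - a hedged sketch, assuming Apache on port 80 and shell access during a peak:
      ss -tan state established '( sport = :80 )' | wc -l   # concurrent HTTP connections right now
      uptime                                                # load average vs. number of cores
      vmstat 5 5                                            # swapping (si/so columns) is the usual killer
      mysqladmin -u root -p extended-status | grep -i -e threads_connected -e slow_queries
    Roughly: if load stays below the core count and the box isn't swapping, the bottleneck is software and more tuning can help; if it is pinned on CPU or swapping at 500 concurrent users, the hardware is simply undersized.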

    Read the article

  • Diagnosing SAN connectivity issues (RHEL5)

    - by Matthew
    We are currently using GFS2 to share a SAN LUN between three servers. However, due to a feature problem with vendor software we are using, we currently have the volume unmounted on two of the boxes and are instead exporting the GFS2 filesystem via NFS from the first one (the software requires some weird locking mechanics that GFS2 doesn't support). As of this morning, NFS was no longer able to read or write the volume from any of the servers, including the NFS server itself. I then checked the normal mount (the directory that is exported on the NFS server) and got a weird input/output error just trying to cd into it. When I ran multipath I got a DM error; however, multipath -l worked just fine. I tried unmounting the GFS2 volume and the CLI hung. I ran init 0, which killed most services, but the shutdown then appeared to hang as well. I logged in via out-of-band access (HP iLO) and saw that the shutdown was stuck trying to unmount the GFS2 volumes. My main priority was getting the box back online, so after about 5 minutes of waiting I did a hard reset. I am now trying to figure out what went wrong. What are the correct logs to investigate? I've never run into SAN issues like this before. The SAN is connected via two fibre connections. Any help would really be appreciated. Everything appears to be up and functional now.
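
    A hedged checklist for the post-mortem, assuming a fibre-channel multipath setup like the one described (log locations are the RHEL5 defaults):
      grep -i -e scsi -e 'i/o error' -e multipath -e gfs2 -e dlm /var/log/messages
      multipath -ll                              # path states: any paths marked failed or faulty?
      cat /sys/class/fc_host/host*/port_state    # did an HBA or fabric link drop?
      dmesg | grep -i -e abort -e rport          # SCSI aborts and remote-port loss around the incident
    GFS2 also depends on the cluster stack even when mounted on a single node, so dlm/cman messages (fencing, quorum) around the time NFS stopped are worth correlating with the event logs on the fibre switch and the array itself.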

    Read the article

  • Screen -X exec commands not working until manually attached

    - by James Watt
    I have a batch script that starts a Java server application inside a screen. The command looks like this:
      cd /dir/ && screen -A -m -d -S javascreen java -Xms640M -Xmx1024M -jar javaserverapp.jar nogui
    After I run the batch script, it starts the server and puts it inside the correct screen. If I list my screens afterwards, I see something like this:
      user@gtwy /dir $ screen -list
      There is a screen on:
              16180.javascreen (Detached)
      1 Socket in /var/run/screen/S-user.
    However, I have a second batch script that sends automated commands to this server and runs on a different crontab interval. Because of the way the application works, I send commands to it like this (this command tells it to alert connected users "testing 123"):
      screen -X exec .\!\! echo say testing 123
    I've also tried:
      screen -R -X exec .\!\! echo say testing 123
      screen -S javascreen -X exec .\!\! echo say testing 123
    Unfortunately, these commands DO NOT WORK. They don't even give me an error message; they just do nothing. HOWEVER - if I manually attach to the screen first (with the command below) and then detach, I can run any of the above commands flawlessly. I can demonstrate this with a video if I wasn't clear enough here.
      screen -r -d
    Thanks in advance.
    Update: here are the important parts of /etc/screenrc. It should be totally vanilla; I've never edited this file.
      # VARIABLES
      # ===============================================================
      # No annoying audible bell, using "visual bell"
      # vbell on                              # default: off
      # vbell_msg " -- Bell,Bell!! -- "       # default: "Wuff,Wuff!!"

      # Automatically detach on hangup.
      autodetach on                           # default: on

      # Don't display the copyright page
      startup_message off                     # default: on

      # Uses nethack-style messages
      # nethack on                            # default: off

      # Affects the copying of text regions
      crlf off                                # default: off

      # Enable/disable multiuser mode. Standard screen operation is singleuser.
      # In multiuser mode the commands acladd, aclchg, aclgrp and acldel can be used
      # to enable (and disable) other user accessing this screen session.
      # Requires suid-root.
      multiuser off

      # Change default scrollback value for new windows
      defscrollback 1000                      # default: 100

      # Define the time that all windows monitored for silence should
      # wait before displaying a message. Default 30 seconds.
      silencewait 15                          # default: 30

      # bufferfile: The file to use for commands
      # "readbuf" ('<') and "writebuf" ('>'):
      bufferfile $HOME/.screen_exchange
      #
      # hardcopydir: The directory which contains all hardcopies.
      # hardcopydir ~/.hardcopy
      # hardcopydir ~/.screen
      #
      # shell: Default process started in screen's windows.
      # Makes it possible to use a different shell inside screen
      # than is set as the default login shell.
      # If begins with a '-' character, the shell will be started as a login shell.
      # shell zsh
      # shell bash
      # shell ksh
      shell -$SHELL
      # shellaka '> |tcsh'
      # shelltitle '$ |bash'

      # emulate .logout message
      pow_detach_msg "Screen session of \$LOGNAME \$:cr:\$:nl:ended."

      # caption always " %w --- %c:%s"
      # caption always "%3n %t%? @%u%?%? [%h]%?%=%c"

      # advertise hardstatus support to $TERMCAP
      # termcapinfo * '' 'hs:ts=\E_:fs=\E\\:ds=\E_\E\\'

      # set every new windows hardstatus line to somenthing descriptive
      # defhstatus "screen: ^En (^Et)"

      # don't kill window after the process died
      # zombie "^["
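
    A hedged sketch of the approach that usually works for this from cron (the session name is taken from the listing above; the trailing newline is the important part): instead of exec, use screen's stuff command to type the line into window 0 of the named session. Cron jobs have no controlling terminal, which is also why naming the session explicitly matters.
      screen -S javascreen -p 0 -X stuff $'say testing 123\n'
      # if the application wants a carriage return rather than a newline:
      screen -S javascreen -p 0 -X stuff $'say testing 123\015'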

    Read the article

  • iptables dos limit for all ports

    - by user973917
    I know how to use the limit and conntrack options for DoS protection. However, I want to add protection that allows no more than, say, 50 connections per port. How can I do this? Basically, I want to make sure that each port can have no more than 50 connections, rather than applying 50 connections globally (which is what the second rule below does, I believe?). Would I do something like:
      iptables -A INPUT --dport 1:65535 -m limit --limit 50/minute --limit-burst 50 -j ACCEPT
    or
      iptables -A INPUT -m limit --limit 50/minute --limit-burst 50 -j ACCEPT
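
    A hedged sketch of one way to get per-port behaviour (the rule values are illustrative): the plain limit match keeps a single global counter, but hashlimit can keep a separate bucket per destination port. Note this limits the rate of new connections per port, which is not quite the same as capping 50 simultaneous connections; for a per-source concurrent cap, connlimit is the other tool to look at.
      iptables -A INPUT -p tcp --syn -m hashlimit \
          --hashlimit-above 50/minute --hashlimit-burst 50 \
          --hashlimit-mode dstport --hashlimit-name perport -j DROP
      # per-source concurrent-connection cap, for comparison:
      iptables -A INPUT -p tcp --syn -m connlimit --connlimit-above 50 -j REJECT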

    Read the article

  • How can I repair my USB drive?

    - by yurko
    My USB drive is in a read-only state and I can't repair it. First of all I tried to erase it using dd:
      root@yurko-laptop:/home/yurko-laptop# ls -l /dev/disk/by-id | grep usb
      lrwxrwxrwx 1 root root  9 ??? 18 23:45 usb-Generic_Flash_Disk_C173828A-0:0 -> ../../sdb
      lrwxrwxrwx 1 root root 10 ??? 18 23:45 usb-Generic_Flash_Disk_C173828A-0:0-part1 -> ../../sdb1
      root@yurko-laptop:/home/yurko-laptop# dd if=/dev/zero of=/dev/sdb
      dd: writing to '/dev/sdb': No space left on device
      8257537+0 records in
      8257536+0 records out
      4227858432 bytes (4.2 GB) copied, 942.633 s, 4.5 MB/s
    After that I wanted to create a new filesystem using fdisk:
      root@yurko-laptop:/home/yurko-laptop# fdisk /dev/sdb
      You will not be able to write the partition table.
      WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
      switch off the mode (command 'c') and change display units to sectors (command 'u').

      Command (m for help): p

      Disk /dev/sdb: 4227 MB, 4227858432 bytes
      4 heads, 63 sectors/track, 32768 cylinders
      Units = cylinders of 252 * 512 = 129024 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00000000

         Device Boot      Start         End      Blocks   Id  System
      /dev/sdb1              18       32768     4126596    b  W95 FAT32

      Command (m for help):
    fdisk showed that the partition still exists and that I can't write the partition table. I tried to delete the existing partition:
      Command (m for help): d
      Selected partition 1

      Command (m for help): w
      Unable to write /dev/sdb
      root@yurko-laptop:/home/yurko-laptop#
    Why am I not able to write the partition table? Does it mean that some hardware failure has occurred? And is it possible to repair this USB drive? I've tried hdparm, and it shows that the read-only flag is on:
      root@yurko-laptop:/home/yurko-laptop# hdparm /dev/sdb

      /dev/sdb:
      SG_IO: bad/missing sense data, sb[]:  f0 00 05 00 00 00 00 0a 00 00 00 00 26 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
       multcount     =  0 (off)
       readonly      =  1 (on)
       readahead     = 256 (on)
       geometry      = 1016/131/62, sectors = 8257536, start = 0
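
    A hedged sketch of the remaining things worth trying, with no guarantee: when a flash controller latches itself read-only it is usually because the NAND has worn out, and that is not recoverable from the host side.
      hdparm -r0 /dev/sdb        # ask the kernel to clear its read-only flag
      blockdev --setrw /dev/sdb  # same idea at the block-device level
      dmesg | tail -n 30         # look for "Write Protect is on" from the usb-storage driver
    If the device still reports itself write-protected after this, the only realistic options left are a vendor-specific low-level reformat tool for that controller, or replacing the drive.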

    Read the article

  • How do you manage perl modules on a Debian system?

    - by nagul
    I'd like to know if you have a method for managing Perl modules on your Debian system, with respect to the following:
      - Installing new modules
      - Listing manually installed modules
      - Checking dependencies, and uninstalling modules
    I have looked at this perlmonks article for background reading: What is the best way to install CPAN modules on Debian? I have previously installed Perl modules using the CPAN module. I have also used dh-make-perl in some cases, when following instructions to build other packages that had Perl dependencies. I'd like to institute a coherent policy on my machine so I can better manage how and where modules are installed, and reduce the chance of breaking Perl on my system. I would very much like a system where I can detect and uninstall modules that are no longer being used.
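
    A hedged sketch of one coherent policy (the package names are illustrative, as shown below): let dpkg own everything, so listing, dependency tracking and removal all come for free. Use the distribution package when it exists, and wrap anything else from CPAN into a local .deb with dh-make-perl.
      apt-get install libdatetime-perl              # packaged module, preferred
      dh-make-perl --build --cpan Some::Module      # builds libsome-module-perl_..._all.deb
      dpkg -i libsome-module-perl_*.deb
      dpkg -l 'lib*-perl' | grep '^ii'              # list what's installed
      apt-get remove --autoremove libdatetime-perl  # clean uninstall with dependency handling
    The one rule that keeps this safe is never running cpan installs as root into the system perl; if something really must come straight from CPAN, local::lib keeps it confined to the home directory.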

    Read the article

  • What is the kernel module "hid_microsoft"?

    - by Masi
    The lsmod command shows this odd module. The command "modprobe -a hid_microsoft" does not reveal anything. What is it?
      $ modinfo hid_microsoft
      filename:       /lib/modules/2.6.28-15-generic/kernel/drivers/hid/hid-microsoft.ko
      license:        GPL
      srcversion:     3FE2E2F2EE89174E885204A
      alias:          hid:b0005v0000045Ep00000701
      alias:          hid:b0003v0000045Ep0000009D
      alias:          hid:b0003v0000045Ep00000713
      alias:          hid:b0003v0000045Ep000000F9
      alias:          hid:b0003v0000045Ep000000DB
      alias:          hid:b0003v0000045Ep0000003B
      depends:
      vermagic:       2.6.28-15-generic SMP mod_unload modversions 586
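
    For what it's worth, the vendor id 045E in those alias lines is Microsoft's USB/Bluetooth vendor ID, so this is the HID quirk driver that the generic hid core loads for Microsoft keyboards and mice. A hedged way to see whether anything on the machine is actually using it (the sysfs driver name may vary by kernel):
      lsmod | grep hid_microsoft           # the "Used by" count is the third column
      ls /sys/bus/hid/drivers/microsoft/   # devices currently bound to this driver, if any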

    Read the article

  • Is there any way to use the xscreensaver gltext to monitor system information?

    - by Shane
    I am trying to get the xscreensaver hack gltext to monitor my system temperature and maybe some other stats. Writing a script that puts the stats together periodically is no problem; the main thing I'm running into is that gltext doesn't refresh - whatever text I feed it stays there. So, for example, I run this command:
      $ /usr/lib/xscreensaver/gltext -text "`cat /proc/acpi/thermal_zone/THRM/temperature`"
    and get a gltext window showing:
      temperature: 60 C
    I can manipulate and format it as necessary, but as my CPU heats up and cools down the value doesn't update, even if I include time variables that do. I have a script that feeds gltext the time as the first line and the temperature as the second line - and although the time updates continuously, the temperature value stays at whatever it was when the screensaver started. Is it possible to do what I want, which is to change the text every 60 seconds if the temperature changes?
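
    A hedged explanation plus a possible workaround: the backticks are expanded once by the shell before gltext even starts, so the temperature is baked into a static string (the clock keeps moving only because gltext re-evaluates date escape sequences itself). As far as I can tell gltext has no option to re-run a command, but some of the other xscreensaver text hacks do - phosphor, for example, accepts -program and keeps reading its output. A sketch, assuming phosphor is installed and the same ACPI path as above:
      /usr/lib/xscreensaver/phosphor -program \
          "watch -n 60 cat /proc/acpi/thermal_zone/THRM/temperature"
    The other place to look is xscreensaver's global text settings (the "Text Program" preference in xscreensaver-demo), which several of the text-capable hacks honour.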

    Read the article

  • USB resets with Ubuntu 9.10

    - by Grumbel
    Since the upgrade to Ubuntu 9.10 I have been getting USB device resets on my Maxtor OneTouch USB hard drive:
      Nov 9 20:54:37 localhost kernel: [32459.100021] usb 2-2: reset high speed USB device using ehci_hcd and address 4
      Nov 9 21:54:37 localhost kernel: [36059.100017] usb 2-2: reset high speed USB device using ehci_hcd and address 4
      Nov 9 23:24:37 localhost kernel: [41459.112025] usb 2-2: reset high speed USB device using ehci_hcd and address 4
    The device itself continues to work fine, but the resets wake it out of its sleep state and cause it to spin up, which is very annoying. Interestingly, as the log shows, the resets happen at fairly regular intervals (an hour or half an hour apart), not randomly. A USB card reader seems to have the same issue, while another USB hard drive from a different manufacturer works fine on the same PC. What could be causing this, and how could I fix it?
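
    A hedged place to start, given the suspiciously regular interval: something is polling or power-managing the drive on a timer. Two cheap experiments (the values are illustrative, and both revert on reboot) are turning off USB autosuspend and checking the per-device power setting, then watching whether the resets stop:
      echo -1 | sudo tee /sys/module/usbcore/parameters/autosuspend   # disable autosuspend globally
      cat /sys/bus/usb/devices/2-2/power/level                        # "on" vs "auto" for this device
    If the resets continue, the other usual suspect on that era of Ubuntu is the desktop's periodic media/SMART polling (hal / devicekit-disks), which can be confirmed by matching the reset timestamps against its polling interval.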

    Read the article

  • Can't FTP into server

    - by Roland
    I need to FTP from one server to another. If I FTP from my local PC using Krusader I'm able to connect, but if I ssh into one server and try to FTP to the other using the same FTP credentials, I get stuck at the message [Resolving host address...]. I know the address is correct, since I can ping it from the server. I use the following command:
      lftp 'open -u username,password server'
    If I use the same command to FTP to a different server, it works. Any help or advice will be greatly appreciated.
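
    A hedged way to narrow this down (hostnames below are placeholders): confirm whether name resolution is really what hangs, since lftp also honours proxy environment variables that can stall the connection at that stage.
      getent hosts ftp.example.com              # what the resolver libraries return on this server
      env | grep -i -e ftp_proxy -e all_proxy   # any proxy variables lftp might be trying to use?
      lftp -e 'debug 5; open -u username,password ftp.example.com; ls; bye'
    If getent works but lftp still stalls, trying the numeric IP in place of the hostname separates a DNS problem from a firewall or proxy problem.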

    Read the article

  • Why do password entries over ssh take so long?

    - by Dean
    When I'm ssh'd into my server, any time I enter my password there is a 40-second delay before the server responds. This occurs when logging in, as well as whenever I run a command via sudo. The delay does not happen when I run su and enter my password, however. Using the -v flag for ssh doesn't show anything during this time, and looking at Wireshark, all traffic between the two machines stops while it is happening. Any idea what's happening, or advice on how to investigate this? The server is running Debian squeeze (6.0.4).
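
    A hedged line of investigation based only on the symptom pattern above (login and sudo stall, su does not): both of the slow paths go through PAM modules that may try to resolve the host or reach a directory/DNS server, so name-resolution timeouts are the usual culprit.
      time getent hosts "$(hostname)"            # does resolving the machine's own name take ~40 s?
      grep -E '^hosts:' /etc/nsswitch.conf       # which sources are consulted, and in what order
      sudo grep -i usedns /etc/ssh/sshd_config   # "UseDNS yes" adds reverse lookups at login
    Adding the hostname to /etc/hosts, removing a dead nameserver from /etc/resolv.conf, or setting UseDNS no in sshd_config are the common fixes once the slow lookup is identified.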

    Read the article

  • sg_map & lsscsi showing old storage version

    - by PratapSingh
    I am using Sun storage and recently upgraded/refreshed my iSCSI LUN storage. We have replicated the old storage to the new storage and attached it to our servers. On the Sun storage side I can see that the storage is attached to the server, and on the server the following command prints:
      iscsiadm -m session
      tcp: [1] 10.1.1.10:3260,2 iqn.86-03.com.sun:02:afsfsf58-c56a-6ba8-a944-addd258687cd
    The above storage is a Sun Storage 7420. But when I run sg_map or lsscsi, it prints a different version:
      lsscsi
      disk    SUN      Sun Storage 7410    1.0   /dev/sda
      disk    SUN      Sun Storage 7410    1.0   /dev/sdb
      disk    SUN      Sun Storage 7410    1.0   /dev/sdc
      disk    SUN      Sun Storage 7410    1.0   /dev/sdd
    Output of ls on /dev/disk/by-path/:
      ls -1 /dev/disk/by-path/
      ip-10.1.1.10:3260-iscsi-iqn.86-03.com.sun:02:afsfsf58-c56a-6ba8-a944-addd258687cd-lun-0
      ip-10.1.1.10:3260-iscsi-iqn.86-03.com.sun:02:afsfsf58-c56a-6ba8-a944-addd258687cd-lun-0-part1
      ip-10.1.1.10:3260-iscsi-iqn.86-03.com.sun:02:afsfsf58-c56a-6ba8-a944-addd258687cd-lun-18
      ip-10.1.1.10:3260-iscsi-iqn.86-03.com.sun:02:afsfsf58-c56a-6ba8-a944-addd258687cd-lun-18-part1
      ip-10.1.1.10:3260-iscsi-iqn.86-03.com.sun:02:afsfsf58-c56a-6ba8-a944-addd258687cd-lun-2
      ip-10.1.1.10:3260-iscsi-iqn.86-03.com.sun:02:afsfsf58-c56a-6ba8-a944-addd258687cd-lun-2-part1
      ip-10.1.1.10:3260-iscsi-iqn.86-03.com.sun:02:afsfsf58-c56a-6ba8-a944-addd258687cd-lun-4
      ip-10.1.1.10:3260-iscsi-iqn.86-03.com.sun:02:afsfsf58-c56a-6ba8-a944-addd258687cd-lun-4-part1
      ip-10.1.1.10:3260-iscsi-iqn.86-03.com.sun:02:afsfsf58-c56a-6ba8-a944-addd258687cd-lun-6
      ip-10.1.1.10:3260-iscsi-iqn.86-03.com.sun:02:afsfsf58-c56a-6ba8-a944-addd258687cd-lun-6-part1
    I have rebooted the server twice, but I still get the same output as given above.
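
    A hedged note on what lsscsi is actually showing and what might refresh it: the "Sun Storage 7410 1.0" string is the SCSI INQUIRY vendor/product/revision reported by the target for each LUN, so if it survives reboots the target itself is most likely still presenting the old identity (plausible after a replication-based migration). Rescanning at least rules out stale data on the initiator side:
      iscsiadm -m session --rescan            # re-discover LUNs on the existing session
      for d in /sys/class/scsi_device/*/device/rescan; do echo 1 > "$d"; done
      sg_inq /dev/sda                         # what the target answers right now (sg3_utils)
    If sg_inq still says 7410, the string to chase lives on the storage appliance, not on the Linux host.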

    Read the article

  • Ganglia and how it communicates

    - by MikeKulls
    I'm a little confused about how Ganglia sends information around, and I have found conflicting information on the web. I would have thought that the gmond process would either send info to gmetad at a regular interval, or that gmetad would request info from the various instances of gmond. At least one online article states this is how it works, but from what I understand that is incorrect: it appears that you configure all gmond processes to send their info to a central gmond process, and gmetad then requests info from that central gmond. Is that correct? In my case I have 6 servers sending their information to 1 central server. If I point gmetad at the central server, I get information from all 6 servers - all good. The weird thing is that if I point gmetad at any one of the 6 servers, I still get info from all 6 servers. How is it that server 1 in my cluster is getting stats from all the other servers?
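
    A hedged explanation that fits the behaviour described: in the default multicast setup every gmond both announces its own metrics and listens to everyone else's, so each gmond in the cluster holds a full copy of the cluster state and gmetad can poll any one of them - which is why pointing gmetad at any of the six servers returns all six. The unicast variant (all gmonds sending to one collector) looks roughly like this; the hostname and cluster name are illustrative:
      /* gmond.conf on every node */
      udp_send_channel {
        host = collector.example.com
        port = 8649
      }
      /* gmond.conf on the collector only */
      udp_recv_channel {
        port = 8649
      }
      tcp_accept_channel {
        port = 8649
      }
      # gmetad.conf
      data_source "my cluster" collector.example.com:8649
    In that layout only the collector has the full picture, and gmetad must be pointed at it (ideally at a second collector too, for redundancy).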

    Read the article
