Search Results

Search found 14709 results on 589 pages for 'root permission'.


  • How do I access files inside a Wubi virtual ext4 Ubuntu partition from within Windows?

    - by aalaap
    I just installed Ubuntu 10.04 using Wubi on a PC that has Windows XP and Windows 7 installed. I worked in it for a while and everything is just fine. However, when I booted back into Windows 7, I couldn't figure out a way to access the files I had created or downloaded in the Ubuntu install. They're in a virtual disk called root.disk in C:\ubuntu\disks. Is there a way I can mount this virtual disk in Windows, or at least browse its contents and extract what I need?
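    A couple of hedged pointers: Windows-side tools such as Ext2Read advertise support for browsing ext4 images (including Wubi's root.disk), and from any Linux live session the image can simply be loop-mounted. A minimal sketch of the latter, assuming the Windows partition is already mounted at /media/windows (path is illustrative):

        # Loop-mount the Wubi disk image from an Ubuntu live CD/USB
        sudo mkdir -p /mnt/wubi
        sudo mount -o loop /media/windows/ubuntu/disks/root.disk /mnt/wubi
        ls /mnt/wubi/home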

    Read the article

  • Can two Linux installations share the same /home partition?

    - by huahsin68
    I am currently using OpenSuse 11.4 and Windows XP on my laptop. I was planning to remove Windows and install Kubuntu instead. My current situation is that I have my root (/) and /home partitions separated in OpenSuse. Can I share the /home partition between OpenSuse and Kubuntu? How do I configure Kubuntu to use the existing /home partition during installation? BTW, the most recent Kubuntu uses the ext4 file system, whereas my OpenSuse partition is ext3. Will this be a problem when I install Kubuntu? Any other issues I need to take care of?
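    For what it's worth, sharing a /home between distributions usually comes down to choosing manual partitioning in the installer, assigning the existing partition the /home mount point, and leaving its "format" box unchecked. The resulting fstab line would look something like this (device name and filesystem are assumptions; check with blkid):

        # Illustrative /etc/fstab entry reusing an existing /home partition
        /dev/sda3  /home  ext3  defaults  0  2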

    Read the article

  • SugarCRM CE Won't Install on Ubuntu 10.10

    - by Trenton Scott
    I have a fresh copy of Ubuntu 10.10 server with a working LAMP installation. I downloaded SugarCRM and browsed to its directory to open the installer (via Firefox). The installer appears fine, I accept the license agreement, and it proceeds to check file permissions. It advises that several directories need looser permissions (chmod 766), and I adjust them accordingly. After making the changes, I click "recheck" and the page just reloads as blank (white). There are no errors visible, nothing in the server logs (Apache/PHP), and installation cannot continue. I'm able to get back to the installation tool by readjusting permissions back to my defaults (0755 for directories, 0644 for files). All files/folders are owned by root and the www-data group. Any idea what's wrong?
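    One observation worth flagging: mode 766 on a directory removes the execute (search) bit, so the web server can no longer traverse it, which can itself produce a blank page. A blank white page from a PHP app usually means a fatal error with display_errors off; a quick hedged way to surface it (php.ini path assumed for Ubuntu 10.10's PHP 5):

        # Temporarily surface PHP errors, then retry the installer step
        sudo sed -i 's/^display_errors = Off/display_errors = On/' /etc/php5/apache2/php.ini
        sudo service apache2 restart
        tail -f /var/log/apache2/error.log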

    Read the article

  • Is there a way to disable specific Spybot Immunization rules?

    - by Iszi
    I've been having problems using a desktop sharing application, which I've traced to the Immunization protections applied via Spybot S&D. Specifically, the problem has been narrowed down to the rules in the \SOFTWARE (Plugins) categories under the Internet Explorer groups. Once I disable these Immunization categories, everything in the application works fine. Each of these categories appears to include ~900 protections on the system. I suspect that the root cause of my problems could be narrowed down to just one, or perhaps a handful, of the settings that get applied in these categories. However, I can't find any options in Spybot S&D which would allow me to drill down to the individual protection rules and choose which to enable or disable. Is there something I'm missing, or is this not a feature available via the GUI? If it's not strictly supported in the application, is there a way to work around it by manually editing some of its files or registry settings? Spybot S&D version: 2.2.21.0 Spybot Start Center version: 2.2.21.129 Windows Ultimate x64

    Read the article

  • Reenabling the Spotlight Menubar item in Mac OS X 10.6

    - by Tim Visher
    I believe I followed the instructions here to disable Spotlight indexing and remove the menubar item. I re-enabled indexing just fine, but when I changed the permissions back to 744, the Spotlight search position came back (as in, the space it would normally occupy), but the actual icon and search box will not show up. If I click that portion of the screen I get a blue box, but I can't type anything into anything. Currently, permissions look like this:

        [~]$ ll /System/Library/CoreServices/Search.bundle.bak/Contents/MacOS/
        total 648
        -rwxr-xr-x 1 root wheel 835K Sep 17 14:48 Search*

    ll is an alias, ll='${LS_PREAMBLE} -hl', where:

        [~]$ echo $LS_PREAMBLE
        ls -GF

    (Ignore the .bak extension. I decided that until I found a way to fully restore it, I would just remove it entirely, following the directions here.) That looks right to me, and obviously something is launching, but the UI elements aren't there. So how can I restore it? Thanks in advance!
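    A hedged guess at a restore sequence: the Spotlight menu extra is hosted by SystemUIServer, so after putting the bundle name and permissions back, relaunching SystemUIServer may be the missing step (paths are from the question; the 755 mode is an assumption matching the listing above):

        cd /System/Library/CoreServices
        sudo mv Search.bundle.bak Search.bundle
        sudo chmod 755 Search.bundle/Contents/MacOS/Search
        killall SystemUIServer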

    Read the article

  • apache - subdomains are slower?

    - by matthewsteiner
    Using ApacheBench, I ran the exact same PHP application at my root domain and at a subdomain. Even with multiple tests and high request counts, the requests-per-second numbers differ dramatically. I mean something like this:

        example.com        - 1200 requests per second
        bench.example.com  -   50 requests per second

    What could be affecting this? These aren't using databases or anything, just mainly displaying a simple page. But it's the same app for both of them, and I'm wondering why they perform so differently. Ideas?
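    For reference, the comparison presumably looked something like this (request count and concurrency are illustrative):

        # ApacheBench: 1000 requests, concurrency 10, against each host
        ab -n 1000 -c 10 http://example.com/
        ab -n 1000 -c 10 http://bench.example.com/

    The usual suspects with a gap this large are per-vhost differences: KeepAlive settings, logging, or a rewrite loop in one vhost's .htaccess.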

    Read the article

  • CentOS - Yum doesn't update anymore?

    - by Xanathos
    I've been trying to use yum, but for some reason not even search works anymore. I even tried searching for packages I had already downloaded, and the result is the same.

        [root@AMDFX03 Downloads]# yum search glibc
        Loaded plugins: fastestmirror, refresh-packagekit, security
        Loading mirror speeds from cached hostfile
        epel/metalink                                            | 22 kB  00:00
         * base: centos.secrel.com.br
         * epel: archive.linux.duke.edu
         * extras: centos.secrel.com.br
         * rpmforge: apt.sw.be
         * updates: centos.secrel.com.br
        adobe-linux-x86_64/primary                               | 1.2 kB 00:00
        http://linuxdownload.adobe.com/linux/x86_64/repodata/primary.xml.gz: [Errno -1] Metadata file does not match checksum
        Trying other mirror.
        Error: failure: repodata/primary.xml.gz from adobe-linux-x86_64: [Errno 256] No more mirrors to try.

    This error appears no matter what I do. Please, can you tell me how to fix this, or at least how to reset yum's configuration?
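    Judging by the output, the failing repository is adobe-linux-x86_64 rather than yum itself; two hedged things to try are clearing cached metadata and confirming the other repos work with that one disabled:

        yum clean all
        yum --disablerepo=adobe-linux-x86_64 search glibc
        # If that works, the Adobe repo's metadata is stale or corrupt; it can
        # be disabled permanently with enabled=0 in its file under /etc/yum.repos.d/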

    Read the article

  • Nginx & Apache Cannot get try_files to work with permalinks

    - by tcherokee
    I have been working on this for the past two weeks now, and for some reason I cannot seem to get nginx's try_files to work with my WordPress permalinks. I am hoping someone will be able to tell me where I am going wrong, and also point out any major errors in my configuration (I am an nginx newbie... but learning :) ). Here are my configuration files.

    nginx.conf:

        user www-data;
        worker_processes 4;
        pid /var/run/nginx.pid;

        events {
            worker_connections 768;
            # multi_accept on;
        }

        http {
            ##
            # Basic Settings
            ##
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 65;
            types_hash_max_size 2048;
            # server_tokens off;
            # server_names_hash_bucket_size 64;
            # server_name_in_redirect off;

            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            ##
            # Logging Settings
            ##
            # Defines the cache log format, cache log location
            # and the main access log location.
            log_format cache '***$time_local '
                             '$upstream_cache_status '
                             'Cache-Control: $upstream_http_cache_control '
                             'Expires: $upstream_http_expires '
                             '$host '
                             '"$request" ($status) '
                             '"$http_user_agent" ';
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;

            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    mydomain.com.conf:

        server {
            listen 123.456.78.901:80; # IP goes here.
            server_name www.mydomain.com mydomain.com;
            #root /var/www/mydomain.com/prod;
            index index.php;

            ## mydomain.com -> www.mydomain.com (301 - Permanent)
            if ($host !~* ^(www|dev)) {
                rewrite ^/(.*)$ $scheme://www.$host/$1 permanent;
            }

            # Add trailing slash to */wp-admin requests.
            rewrite /wp-admin$ $scheme://$host$uri/ permanent;

            # All media (including uploaded) is under wp-content/ so
            # instead of caching the response from apache, we're just
            # going to use nginx to serve directly from there.
            location ~* ^/(wp-content|wp-includes)/(.*)\.(jpg|png|gif|jpeg|css|js|m$
                root /var/www/mydomain.com/prod;
            }

            # Don't cache these pages.
            location ~* ^/(wp-admin|wp-login.php) {
                proxy_pass http://backend;
            }

            location / {
                if ($http_cookie ~* "wordpress_logged_in_[^=]*=([^%]+)%7C") {
                    set $do_not_cache 1;
                }
                proxy_cache_key "$scheme://$host$request_uri $do_not_cache";
                proxy_cache main;
                proxy_pass http://backend;
                proxy_cache_valid 30m; # 200, 301 and 302 will be cached.
                # Fallback to stale cache on certain errors.
                # 503 is deliberately missing, if we're down for maintenance
                # we want the page to display.
                #try_files $uri $uri/ /index.php?q=$uri$args;
                #try_files $uri =404;
                proxy_cache_use_stale error timeout invalid_header http_500 http_502 http_504 http_404;
            }

            # Cache purge URL - works in tandem with WP plugin.
            # location ~ /purge(/.*) {
            #     proxy_cache_purge main "$scheme://$host$1";
            # }

            # No access to .htaccess files.
            location ~ /\.ht {
                deny all;
            }
        } # End server

    gzip.conf:

        # Gzip Configuration.
        gzip on;
        gzip_disable msie6;
        gzip_static on;
        gzip_comp_level 4;
        gzip_proxied any;
        gzip_types text/plain text/css application/x-javascript text/xml
                   application/xml application/xml+rss text/javascript;

    proxy.conf:

        # Set proxy headers for the passthrough
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_max_temp_file_size 0;
        client_max_body_size 10m;
        client_body_buffer_size 128k;
        proxy_connect_timeout 90;
        proxy_send_timeout 90;
        proxy_read_timeout 90;
        proxy_buffer_size 4k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
        add_header X-Cache-Status $upstream_cache_status;

    backend.conf:

        upstream backend {
            # Defines backends.
            # Extracting here makes it easier to load balance
            # in the future. Needs to be specific IP as Plesk
            # doesn't have Apache listening on localhost.
            ip_hash;
            server 127.0.0.1:8001; # IP goes here.
        }

    cache.conf:

        # Proxy cache and temp configuration.
        proxy_cache_path /var/www/nginx_cache levels=1:2 keys_zone=main:10m
                         max_size=1g inactive=30m;
        proxy_temp_path /var/www/nginx_temp;
        proxy_cache_key "$scheme://$host$request_uri";
        proxy_redirect off;

        # Cache different return codes for different lengths of time
        # We cached normal pages for 10 minutes
        proxy_cache_valid 200 302 10m;
        proxy_cache_valid 404 1m;

    The two commented-out try_files lines in location / of the mydomain config are the ones I tried. The error I found in the error log is below:

        ...rewrite or internal redirection cycle while internally redirecting to "/index.php"

    Thanks in advance
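    A hedged reading of that error: try_files checks paths on disk relative to the location's root, but root is commented out in this server block, so /index.php can never be found and nginx redirects to itself forever. The usual pattern when proxying is to define a root and let try_files fall back to a named location that does the proxy_pass (sketch, not verified against this setup):

        root /var/www/mydomain.com/prod;

        location / {
            try_files $uri $uri/ @backend;
        }

        location @backend {
            proxy_pass http://backend;
        }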

    Read the article

  • LVM, Soft RAID1, and Replication?

    - by mtkoan
    Hi all, I am practicing putting together an HA file server. It is a Linux server with two 1.5 TB hard drives. My plan is to use LVM to manage the physical volumes into logical volumes for /, /home, and /var, then use md (soft RAID 1) to mirror the image onto the second HDD, THEN use DRBD to mirror the entire setup to another server. Is this overkill? Would I be okay with just md and DRBD? The system will serve users' home dirs (~100) and probably some groupware or other local intranet. On my own machines I've always separated the root and /home partitions so that if I break something, I can easily reinstall the OS. Should I follow that same theory here? If so I need LVM, because I really can't predict where we'll need more space, /var or /home.
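    For what it's worth, the conventional stacking order is the reverse: RAID1 at the bottom, DRBD on the md device, and LVM on top, so both mirrors and both servers stay identical. A sketch under those assumptions (device names illustrative; the DRBD resource config is omitted):

        # 1. Mirror the two disks
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
        # 2. Point the DRBD resource's "disk" at /dev/md0; once /dev/drbd0 is up:
        pvcreate /dev/drbd0
        vgcreate vg0 /dev/drbd0
        lvcreate -L 200G -n home vg0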

    Read the article

  • xf86OpenConsole: Cannot open /dev/tty0 (No such file or directory)

    - by mazgalici
        root@mazgalici:~# startx

        X.Org X Server 1.7.6
        Release Date: 2010-03-17
        X Protocol Version 11, Revision 0
        Build Operating System: Linux 2.6.24-28-server i686 Ubuntu
        Current Operating System: Linux mazgalici 2.6.18-194.26.1.el5.028stab079.2PAE #1 SMP Fri Dec 17 19:34:22 MSK 2010 i686
        Kernel command line: quiet
        Build Date: 10 November 2010 11:25:26AM
        xorg-server 2:1.7.6-2ubuntu7.4 (For technical support please see )
        Current version of pixman: 0.16.4
        Before reporting problems, check to make sure that you have the latest version.
        Markers: (--) probed, (**) from config file, (==) default setting,
                 (++) from command line, (!!) notice, (II) informational,
                 (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
        (==) Log file: "/var/log/Xorg.0.log", Time: Tue Jan 11 01:28:48 2011
        (==) Using config directory: "/usr/lib/X11/xorg.conf.d"

        Fatal server error:
        xf86OpenConsole: Cannot open /dev/tty0 (No such file or directory)

        Please consult the The X.Org Foundation support at http://wiki.x.org for help.
        Please also check the log file at "/var/log/Xorg.0.log" for additional information.

        ddxSigGiveUp: Closing log
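    A hedged observation: the running kernel (2.6.18-...028stab...) is an OpenVZ/Virtuozzo container kernel, and containers normally have no /dev/tty0 at all, which is exactly what xf86OpenConsole is complaining about. Checking for, and if necessary creating, the node is a quick test, though a VPS generally cannot run a local X server anyway:

        ls -l /dev/tty0 || mknod /dev/tty0 c 4 0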

    Read the article

  • Passenger_wsgi.py given precedence over DirectoryIndex?

    - by Walkerneo
    I was having an issue with my site today: apache wasn't serving index.php by default. I had moved passenger_wsgi.py to the directory above the document root so that I could serve python files without having to use PassengerAppRoot in the .htaccess file. I wanted to do this because I set up a development sub-domain on the site and wanted to use a different passenger_wsgi for the two domains, which meant having different .htaccess files for the different PassengerAppRoots. Is there a way to keep passenger_wsgi.py where it was and still let apache serve the index.php files? edit: I'm sorry, I'm tired. I just realized that the way this probably works is that passenger_wsgi.py is handling the routing instead of apache.
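    For reference, a hedged sketch of the per-domain alternative the question mentions: since PassengerAppRoot works in .htaccess, each (sub)domain can pin its own app while passenger_wsgi.py stays out of the document root (paths are assumptions):

        # .htaccess of the main domain
        PassengerAppRoot /home/user/apps/prod
        DirectoryIndex index.php

        # .htaccess of the dev subdomain
        # PassengerAppRoot /home/user/apps/dev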

    Read the article

  • Simple HTTP server that will send the same file for all requests?

    - by Rory McCann
    I need to debug an XML-RPC application, which sends XML replies over HTTP. I have a sample XML reply (i.e. data from the server, sent to the client that isn't working) that I'd like to use to debug my application. Ideally I'd like a simple HTTP server that will serve one file in reply to all requests. Someone requests /? Send them this file. Someone POSTs to /server/page.php with a certain cookie? Just send them this file. I don't care about multithreading or security; I will only need to use this for a few hours to debug, and I have root on the machine. I.e. I'm hoping there's something as easy to use as this:

        simple_http_server -p 12445 -f my_test_file

    I'm aware of python's SimpleHTTPServer module, but I'm not sure how to make it work in this case.
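    A minimal sketch with netcat that answers every request with the same payload (flag spellings vary between netcat variants; this matches Debian's traditional netcat):

        # Serve my_test_file to every request on port 12445, forever
        while true; do
          { printf 'HTTP/1.1 200 OK\r\nContent-Type: text/xml\r\n\r\n'
            cat my_test_file
          } | nc -l -p 12445 -q 1
        done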

    Read the article

  • startx error no desktop manager

    - by WikiWitz
    I have Backtrack 5 R2 (KDE). I started recovery mode and did a failsafe xorg configuration. After that, I cannot load KDE when I enter the startx command after logging in. Whenever I run startx (as root), the result resembles the following (this is not the actual output; I just drew it with MS Paint because I cannot take a screenshot): the screen is just black with an icon in the upper-left corner, and a pop-up menu appears when left-clicking the mouse. I tried the cp xorg.conf.failsafe xorg.conf advice from other websites with no luck. I have also tried the 'reconfigure option(s)' from the recovery mode, with no success.

    Read the article

  • Windows (7 & Vista) laptop monitor does not come back after closing lid

    - by Scott Vercuski
    I'm experiencing an issue with Windows Vista (and now Windows 7) on my laptop. When I close the laptop lid the monitor goes blank, but it will not come back on after I re-open the lid. The screen stays blank and nothing I do will bring it back. The laptop is an HP DV9000 series. Has anyone else run into this issue? One of the solutions I've seen online is to go into the device manager and replace the lid driver with something nonsensical (the website suggested pointing the driver at the sound recorder). This does solve the problem by disabling the lid, but doesn't really resolve the root issue. I'm asking if anyone has a method I can use to debug what's going on. How do I tell if it is an operating system issue vs. a malfunction within the lid itself? I'd actually like the lid to function as it's meant to. Thank you!

    Read the article

  • Running PHPmyAdmin on Nginx, port 8080 passed to varnish not working well!

    - by amrnt
    I installed Nginx, Varnish and PHP-fpm, then installed PHPmyAdmin and made a virtual host for it:

        server {
            listen 8080;
            server_name phpmyadmin.Domain.com;

            access_log /var/log/phpmyadmin.access_log;
            error_log  /var/log/phpmyadmin.error_log;

            location / {
                root /usr/share/phpmyadmin;
                index index.php;
            }

            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /usr/share/phpmyadmin$fastcgi_script_name;
                include /opt/nginx/conf/fastcgi_params;
            }
        }

    When I go to phpmyadmin.Domain.com it works as expected! But after submitting the username/password it redirects me to phpmyadmin.Domain.com:8080/index.php?... and that page cannot be found. What could I do?
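    A hedged guess at the mechanism: Varnish answers on port 80 and fetches from nginx on 8080, so when the post-login redirect is built from the backend request, the :8080 leaks into the URL, which visitors cannot reach directly. Two knobs commonly used here (sketch, not verified against this setup):

        # Inside the server block listening on 8080:
        port_in_redirect off;   # keep :8080 out of nginx-generated redirects

    phpMyAdmin's own login redirect is built in PHP, though, so it may also be necessary to pin the public URL in config.inc.php ($cfg['PmaAbsoluteUri'] is the usual setting in phpMyAdmin 3.x).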

    Read the article

  • HTTPS version of page throws 404, regular HTTP appears fine?

    - by Ryan
    I'm having a strange issue with a website in IIS on Windows Server 2003. It has a valid wildcard certificate on it; however, when I use HTTPS on the page I get a 404 Not Found. Without HTTPS it shows up fine. Also, if I go to the domain root of the site using HTTP, the homepage shows up, but with HTTPS it REDIRECTS ME to a totally different website installed on the same IIS server. I am quite confused. I tried giving each site a unique IP address but it didn't change anything; I also tried changing the SSL ports, no luck. This IIS is also set up to run PHP. What could I check to resolve this?
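    One hedged lead: IIS 6 cannot distinguish HTTPS sites by host header unless SecureBindings is set explicitly, so when several sites share an IP, all port-443 traffic lands on whichever site owns the binding, which would explain both the 404 and the redirect to the wrong site. The classic fix (site ID and host name are placeholders):

        REM Run from C:\Inetpub\AdminScripts on the IIS 6 box
        cscript adsutil.vbs set /w3svc/1/SecureBindings ":443:www.example.com"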

    Read the article

  • How to start a service at boot time in ubuntu 12.04, run as a different user?

    - by Alex
    I have a server, ClueReleaseManager, which I have installed on an Ubuntu 12.04 system under a separate user (named pypi), and I want it to start at boot. I have already tried to create a simple bash script with some commands (log in as user pypi, activate a virtual python environment, start the server), but this does not work properly: either the terminal crashes, or when I ask for the service's status it reports started and I find myself logged in as user pypi...? So, here is the question: what steps do I take to make sure the ClueReleaseManager service properly starts up at boot time, running as user pypi, and which I can control (start/stop/...) at runtime? Additional information and constraints: I want to do this as simply as possible, without any other packages/programs installed; I am not familiar with the Ubuntu 12.04 init structure; and all the information I found on the web is sparse, confusing, incorrect, or does not apply to my case of running a service as a user other than root.
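    Ubuntu 12.04 uses Upstart, which can do all of this natively with no extra packages; a minimal job sketch (the file name and exec path are assumptions for illustration):

        # /etc/init/cluereleasemanager.conf
        description "ClueReleaseManager"
        start on runlevel [2345]
        stop on runlevel [016]
        respawn
        setuid pypi
        # Point exec at the server binary inside pypi's virtualenv
        exec /home/pypi/venv/bin/cluereleasemanager

    It can then be controlled with sudo start cluereleasemanager, sudo stop cluereleasemanager, and status cluereleasemanager.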

    Read the article

  • OpenVPN - client-to-client traffic working in one direction but not the other

    - by Pawz
    I have the following VPN configuration:

        +------------+                +------------+                +------------+
        |  outpost   |----------------|    kino    |----------------|  guchuko   |
        +------------+                +------------+                +------------+
        OS: FreeBSD 6.2               OS: Gentoo 2.6.32             OS: Gentoo 2.6.33.3
        Keyname: client3              Keyname: server               Keyname: client1
        eth0: 10.0.1.254              eth0: 203.x.x.x               eth0: 192.168.0.6
        tun0: 192.168.150.18          tun0: 192.168.150.1           tun0: 192.168.150.10
        P-t-P: 192.168.150.17         P-t-P: 192.168.150.2          P-t-P: 192.168.150.9

    Kino is the server and has client-to-client enabled. I am using "fragment 1400" and "mssfix" on all three machines, and an mtu-test on both connections is successful. All three machines have IP forwarding enabled, by this on the Gentoo boxes:

        net.ipv4.conf.all.forwarding = 1

    And this on the FreeBSD box:

        net.inet.ip.forwarding: 1

    In the server's "ccd" directory are the following files:

        client1:
            iroute 192.168.0.0 255.255.255.0
        client3:
            iroute 10.0.1.0 255.255.255.0

    The server config has these routes configured:

        push "route 192.168.0.0 255.255.255.0"
        push "route 10.0.1.0 255.255.255.0"
        route 192.168.0.0 255.255.255.0
        route 10.0.1.0 255.255.255.0

    Kino's routing table looks like this:

        192.168.150.0   192.168.150.2   255.255.255.0    UG   0 0 0 tun0
        10.0.1.0        192.168.150.2   255.255.255.0    UG   0 0 0 tun0
        192.168.0.0     192.168.150.2   255.255.255.0    UG   0 0 0 tun0
        192.168.150.2   0.0.0.0         255.255.255.255  UH   0 0 0 tun0

    Outpost's like this:

        192.168.150     192.168.150.17  UGS  0 17 tun0
        192.168.0       192.168.150.17  UGS  0  2 tun0
        192.168.150.17  192.168.150.18  UH   3  0 tun0

    And Guchuko's like this:

        192.168.150.0   192.168.150.9   255.255.255.0    UG   0 0 0 tun0
        10.0.1.0        192.168.150.9   255.255.255.0    UG   0 0 0 tun0
        192.168.150.9   0.0.0.0         255.255.255.255  UH   0 0 0 tun0

    Now, the tests. Pings from Guchuko to Outpost's LAN IP work OK, as does the reverse: pings from Outpost to Guchuko's LAN IP. However... pings from Outpost to a machine on Guchuko's LAN work fine:

        .(( root@outpost )). (( 06:39 PM ))
        :: ~ :: # ping 192.168.0.3
        PING 192.168.0.3 (192.168.0.3): 56 data bytes
        64 bytes from 192.168.0.3: icmp_seq=0 ttl=63 time=462.641 ms
        64 bytes from 192.168.0.3: icmp_seq=1 ttl=63 time=557.909 ms

    But a ping from Guchuko to a machine on Outpost's LAN does not:

        .(( root@guchuko )). (( 06:43 PM ))
        :: ~ :: # ping 10.0.1.253
        PING 10.0.1.253 (10.0.1.253) 56(84) bytes of data.
        --- 10.0.1.253 ping statistics ---
        3 packets transmitted, 0 received, 100% packet loss, time 2000ms

    Guchuko's tcpdump of tun0 shows:

        18:46:27.716931 IP 192.168.150.10 > 10.0.1.253: ICMP echo request, id 63009, seq 1, length 64
        18:46:28.716715 IP 192.168.150.10 > 10.0.1.253: ICMP echo request, id 63009, seq 2, length 64
        18:46:29.716714 IP 192.168.150.10 > 10.0.1.253: ICMP echo request, id 63009, seq 3, length 64

    Outpost's tcpdump on tun0 shows:

        18:44:00.333341 IP 192.168.150.10 > 10.0.1.253: ICMP echo request, id 63009, seq 3, length 64
        18:44:01.334073 IP 192.168.150.10 > 10.0.1.253: ICMP echo request, id 63009, seq 4, length 64
        18:44:02.331849 IP 192.168.150.10 > 10.0.1.253: ICMP echo request, id 63009, seq 5, length 64

    So Outpost is receiving the ICMP requests destined for the machine on its subnet, but appears not to be forwarding them. Outpost has gateway_enable="YES" in its rc.conf, which correctly sets net.inet.ip.forwarding to 1, as mentioned earlier. As far as I know, that's all that's required to make a FreeBSD box forward packets between interfaces. Is there something else I could be forgetting? FWIW, pinging 10.0.1.253 from Kino has the same result: the traffic does not get forwarded.

    UPDATE: I've found that I can only ping certain IPs on Guchuko's LAN from Outpost. From Outpost I can ping 192.168.0.3 and 192.168.0.2, but 192.168.0.99 and 192.168.0.4 are unreachable, with the same tcpdump behavior. I think this means the problem can't be due to IP forwarding or routing, because Outpost can reach SOME hosts on Guchuko's LAN but not others, and likewise Guchuko can reach two hosts on Outpost's LAN but not others. This baffles me.
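    A hedged thought, given that only some hosts answer: forwarding and routing on the gateways look fine, so the asymmetry is more likely on the end hosts themselves, i.e. the unreachable machines either run a host firewall that drops pings from off-subnet sources, or have no return route to 192.168.150.0/24 because their default gateway is not the VPN client. Two illustrative checks/workarounds:

        # On an unreachable Linux host behind Outpost, add a return route
        # via the VPN client (addresses taken from the diagram above):
        ip route add 192.168.150.0/24 via 10.0.1.254

        # Or masquerade VPN traffic on the client so replies retrace their path:
        iptables -t nat -A POSTROUTING -s 192.168.150.0/24 -o eth0 -j MASQUERADE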

    Read the article

  • Deny directory browsing in a Proftpd / Ubuntu Installation

    - by skylarking
    I used this guide to set up a ProFTPD installation on an Ubuntu 8.04 server. It works well, but the generic user (userftp) can run ls and is able to change to any directory and browse freely on the server, from the root (/) on up. I added the line /bin/false to /etc/shells in hopes that that would prevent this. I really only want the userftp account to be able to upload to the generic /home/FTP-Shared directory and be able to do nothing else on the server. How is this accomplished? This is a headless Ubuntu box and I am using the CLI only, no GUI admin tools.
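    For what it's worth, the knob that usually does this in ProFTPD is DefaultRoot, which chroots users into a directory at login; /etc/shells only affects shell logins, not FTP browsing. A minimal sketch, assuming userftp's home directory is /home/FTP-Shared:

        # In /etc/proftpd/proftpd.conf: jail every user to their home directory
        DefaultRoot ~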

    Read the article

  • I need a few minutes of dedicated server a week, but not for hosting, just to convert ogg etc

    - by talkingnews
    I'm completely happy with my web hosting; it's just that I need to do one little thing they won't allow, and that's run an instance of SoX to convert about 30 mp3s to ogg files, in various directories, a couple of times a week, done automatically in response to the detection of an mp3 upload. I'm probably looking at a minute of server time over the whole week. I've had unhelpful suggestions on other forums like "why not leave your home PC on 24 hours a day and use all your ISP bandwidth to do this", which doesn't work for me. I know that I can host files on, say, Amazon S3, but is there something similar for my needs? All it would need to do would be: wget/ftp the mp3 files, convert them to ogg, ftp the files back to my hosting. Of course, all this wouldn't be needed if there were such a thing as a compiled binary of SoX (or any mp3-to-ogg converter) for CentOS which I could upload without needing root access, but I've given up asking that one. Always open to suggestions!
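    For scale, the conversion step itself is tiny once a SoX build with mp3 support is available somewhere; an illustrative loop over the fetched files:

        # Convert every mp3 one directory deep into an ogg alongside it
        for f in */*.mp3; do
            sox "$f" "${f%.mp3}.ogg"
        done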

    Read the article

  • curl XPUT returning HTTP 500 error message

    - by pradeepchhetri
    I have added the following changes to my nginx configuration:

        server {
            listen 8080;
            root /usr/share/nginx/www;

            client_body_temp_path /tmp/;
            dav_methods PUT DELETE MKCOL COPY MOVE;
            create_full_put_path on;
            dav_access user:rw group:rw all:rw;
        }

    I have nginx configured with --with-http_dav_module as well. But when I run the command:

        $ curl -XPUT http://172.16.31.127:8080/test.html -d 'test'

    I am getting a 500 Internal Server error. Can anyone help me out in solving this?
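    A hedged first check: with the DAV module, a 500 on PUT is very often just the worker user being unable to write to the target directory, and the error log states this explicitly. For example:

        # See the real reason behind the 500
        tail -n 20 /var/log/nginx/error.log
        # If it's a permission error, grant the worker user write access
        # (www-data is an assumption; match it to the "user" directive)
        chown -R www-data:www-data /usr/share/nginx/www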

    Read the article

  • add script on boot linux machine

    - by user1546679
    I have a script to start a service on my Ubuntu. I added it to the boot sequence using "# update-rc.d projeto defaults", but it still doesn't start at boot. I think it is because I start the service as another user ("su - www-data -c ..."), but I am not sure, because I ran the update-rc.d command as root. When I execute the script from a terminal, it asks for the password of the user www-data. Does anyone know what is happening? Thanks a lot! Felipe

        #!/bin/bash
        # /var/www/boinc/projeto/bin/start

        function action {
            su - www-data -c "/var/www/boinc/projeto/bin/$1"
        }

        case $1 in
            start|stop|status)
                action $1
                ;;
            *)
                echo "ERRO: usar $0 (start|stop|status)"
                exit 1
                ;;
        esac
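    Two hedged observations: su only prompts for a password when invoked by a non-root user, so the prompt seen in the terminal would not occur at boot, where init runs the script as root. Separately, update-rc.d only creates rc*.d links for a script that lives in /etc/init.d (ideally with an LSB header); a sketch of the wiring (names assumed):

        # Install the script where update-rc.d expects it
        sudo cp /var/www/boinc/projeto/bin/start /etc/init.d/projeto
        sudo chmod +x /etc/init.d/projeto
        sudo update-rc.d projeto defaults
        # Test exactly what init will run at boot
        sudo /etc/init.d/projeto start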

    Read the article

  • Dealing with LDAP failure when using it for PAM/NSS?

    - by Insyte
    I use a redundant pair of OpenLDAP servers for PAM auth and directory services via NSS. It's been 100% reliable so far, but nothing runs flawlessly forever. What steps should I take now so I have a fighting chance of recovering from failure of the LDAP server(s)? In my informal testing, it appears that even already authenticated shells are largely useless as all username/uid lookups hang until the directory server comes back. So far I've come up with only two things: Do not use NSS-LDAP and PAM-LDAP on the LDAP servers themselves. Create a root-level account on all boxes that only accepts publickey authentication from our local subnet and protect that key well. I'm not sure how much good this would do me as once I'm logged in, I suspect I wouldn't be able to accomplish anything since all the userid lookups would be hanging. Any other suggestions?
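    Two mitigations that are commonly paired here (sketch; the service command varies by distro): keep local accounts resolvable before LDAP so root never depends on the directory, and run nscd so recently seen users keep resolving during an outage:

        # /etc/nsswitch.conf: consult local files first, then LDAP
        passwd: files ldap
        group:  files ldap
        shadow: files ldap

        # Keep a daemon-side cache of passwd/group lookups
        sudo service nscd start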

    Read the article

  • How do I run Munin plugins written in Ruby, using RVM?

    - by hlg
    Hi! I'm trying to run some Munin plugins written in Ruby. I would like to use RVM, so Munin needs to know where to find Ruby. I tried to change the line calling munin-cron in the cron file as follows:

        */5 * * * * munin bash -c 'source /usr/local/lib/rvm && rvm 1.9.2@munin && /usr/bin/munin-cron'

    This leads to error messages in munin-node.log, saying:

        /usr/bin/env: ruby: No such file or directory

    When I change the plugins' shebangs to the actual path of the Ruby executable it works, but the RVM environment should be set so that '/usr/bin/env ruby' works. It does when I execute the plugins as root. Any ideas?
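    One hedged note on the mechanism: plugins are executed by the munin-node daemon, not by munin-cron, so changing the cron file never reaches them; the environment has to be fixed where munin-node runs the plugin. RVM's wrapper scripts bake the right environment into a single executable, which sidesteps /usr/bin/env entirely (wrapper name and install path are illustrative):

        # Generates a munin_ruby wrapper in RVM's bin directory,
        # pointing at the 1.9.2@munin gemset
        rvm wrapper 1.9.2@munin munin ruby
        # Then use it as the plugin shebang:
        #   #!/usr/local/rvm/bin/munin_ruby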

    Read the article

  • supervise/daemontools conflicts with apache -D FOREGROUND

    - by Kevin G.
    Hoping that somebody can help us understand this behavior. We've got a bunch of daemontools services under /etc/service/. One of the services controls apache, and the run script has this in it:

        exec envdir /var/lib/supervise/wwwproxy/env setuidgid root bash <<-BASH
            ulimit -n 8192  # also increase the running user's file descriptor limit
            exec apache2 -f /path/to/demo_apache2.conf -D FOREGROUND
        BASH

    We were having the problem that svc -d /etc/service/* actually had the effect of restarting all the services; it didn't take them down. We finally tracked it down to that one service, and found that svc -d /etc/service/apache2 would bring up any other service that was down, including itself. Changing FOREGROUND to NO_DAEMONIZE fixes the behavior, but we'd really like to understand what's going on. Can anybody explain why an svc -d on one service would bring another service up? Thanks for any clue you can offer.

    Read the article
