Search Results

Search found 12569 results on 503 pages for 'root plist'.

Page 168 of 503

  • Why is my nginx alias not working?

    - by Rob
    I'm trying to set up an alias so that when someone accesses /phpmyadmin/, nginx serves it from /home/phpmyadmin/ rather than from the usual document root. However, every time I pull up the URL, I get a 404 on everything that isn't served through FastCGI. FastCGI seems to be working fine; the rest is not. strace shows nginx trying to pull everything else from the usual document root, yet I can't figure out why. Can anyone provide some insight? Here is the relevant part of my config:

        location ~ ^/phpmyadmin/(.+\.php)$ {
            include fcgi.conf;
            fastcgi_index index.php;
            fastcgi_pass unix:/tmp/php-cgi.sock;
            fastcgi_param SCRIPT_FILENAME /home$fastcgi_script_name;
        }
        location /phpmyadmin {
            alias /home/phpmyadmin/;
        }
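    A quick way to see which location block nginx is actually matching for the failing requests - this assumes an nginx binary built with --with-debug and a log path of your choosing - is to turn on debug logging for the server block and reproduce one of the 404s:

        # inside the server { } block -- log path is an assumption
        error_log /var/log/nginx/pma-debug.log debug;

        # then, from a shell on the server
        curl -sI http://localhost/phpmyadmin/somefile.css
        grep -E 'test location|open\(\)' /var/log/nginx/pma-debug.log | tail

    The "test location" lines show which block won the match, and the open() lines show the filesystem path nginx actually tried, which should confirm whether the alias is being applied at all.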

    Read the article

  • Running commands on FreeBSD Live CD

    - by jmc
    I'm running FreeBSD 9.1-PRERELEASE on a VPS under Xen virtualization. I tried to update it to 9.1-RELEASE, but mergemaster toasted my /etc/master.passwd and /etc/passwd, so what I have now are blank copies of the two files. What I did was boot a mounted Live CD, mount my root partition on /mnt, and manually re-enter every entry in /mnt/etc/master.passwd and /mnt/etc/passwd from another FreeBSD server. I believe that every time you edit master.passwd and passwd you have to run pwd_mkdb, but this gives me a "read-only file" error. What I plan to do is enable PermitRootLogin and PermitEmptyPasswords first so I can log in as root before I redo the necessary changes. But I still have to run pwd_mkdb, so is there a way to run this command from the Live CD?
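    For what it's worth, pwd_mkdb can be pointed at a different etc directory, which is the usual way to rebuild the password databases of a system mounted under /mnt from a live environment. A sketch, assuming the root partition is mounted (read-write) on /mnt:

        # remount read-write if the live CD mounted it read-only
        mount -u -o rw /mnt

        # rebuild spwd.db / pwd.db under /mnt/etc and regenerate passwd(5)
        # -d selects the target etc directory, -p writes a Version 7 passwd file
        pwd_mkdb -d /mnt/etc -p /mnt/etc/master.passwd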

    Read the article

  • How do I find out when and by whom a particular user was deleted in linux?

    - by executor21
    I recently ran into a very odd occurrence on one system I'm using. For no apparent reason, my user account was deleted, although the home directory is still there. I have root access, so I can restore the account, but first I want to know how this happened, and exactly when. Inspecting root's .bash_history file and the output of the "last" command gave nothing, and I'm (well, was) the only sudoer on the system. How would I find out when this deletion happened? The distro is CentOS release 5.4 (Final), if that helps.
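    A few places often still hold evidence on a CentOS 5 box, assuming the relevant logs have not rotated away: userdel logs through syslog, shadow-utils keeps one-generation backups of the account files, and file timestamps narrow down the moment of change.

        # shadow-utils logs account removals to the secure log
        grep -i 'userdel\|delete user' /var/log/secure*

        # previous generation of the account files, kept by vipw/useradd/userdel
        ls -l /etc/passwd /etc/passwd- /etc/shadow /etc/shadow-

        # any surviving sudo activity around the same time
        grep -i sudo /var/log/secure* | tail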

    Read the article

  • Puppet - pass variable with a file create command

    - by Tim Brigham
    I need a way to pass a given variable - let's say thearch - to several different files within a given class, and I need to be able to set the contents of this variable for each file individually. I have tried the following:

        file { "xxx":
          thearch => "i386",
          path    => "/xxx/yyyy",
          owner   => root,
          group   => root,
          mode    => 644,
          content => template("module/test.erb"),
        }

    This doesn't pass the variable through, so I can't use it with a <%= thearch %> statement in the ERB file as I expect. What am I doing wrong here?
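    The file type only accepts its own attributes, so an arbitrary parameter like thearch is rejected rather than handed to the template. The usual workaround is a defined type that wraps the file resource; its parameters are ordinary variables inside the definition and therefore visible to the ERB template. A sketch, with the define and parameter names made up for illustration:

        define module::arch_file($thearch, $filepath) {
          file { $title:
            path    => $filepath,
            owner   => root,
            group   => root,
            mode    => 644,
            content => template("module/test.erb"),
          }
        }

        module::arch_file { "xxx":
          thearch  => "i386",
          filepath => "/xxx/yyyy",
        }

    In the template, <%= thearch %> (or <%= @thearch %> on newer Puppet versions) then expands per resource.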

    Read the article

  • Updating and deleting java (red hat / centos)

    - by JochemTheSchoolKid
    I am a total noob with Linux, so please explain clearly if you have a solution for me. I have a VPS and I want to update Java. I found a guide on the Java site which says: rpm -e <package_name>. I searched for the packages:

        [root@srv1 ~]# rpm -qa | grep java
        java_cup-0.10k-5.el6.x86_64
        java-1.5.0-gcj-1.5.0.0-29.1.el6.x86_64

    Then I tried the delete command:

        [root@srv1 ~]# rpm -e java-1.5.0-gcj-1.5.0.0-29.1.el6.x86_64
        error: Failed dependencies:
            java-gcj-compat is needed by (installed) java_cup-1:0.10k-5.el6.x86_64
            java-gcj-compat >= 1.0.70 is needed by (installed) sinjdoc-0.5-9.1.el6.x86_64

    What should I do now?
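    The dependency error just means other packages still need the GCJ runtime. Either remove everything that depends on it in one rpm transaction, or let yum work out the dependent set; a sketch using the package names from the output above:

        # option 1: one rpm transaction that includes the dependents
        rpm -e java_cup sinjdoc java-1.5.0-gcj

        # option 2: let yum remove the package plus whatever requires it
        yum remove java-1.5.0-gcj

    After that, the Java package you actually want can be installed without the old GCJ bits getting in the way.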

    Read the article

  • Why does my CentOS logrotate run at random times?

    - by Mike Pennington
    I put a logrotate configuration file in /etc/logrotate.d/ and expected the logs to rotate at a consistent time; however, they do not... log rotation times are seemingly random +/- one hour. Why are the log rotation start times random, and how can I change this? Informational: my logrotate config file looks like this...

        /opt/backups/network/*.conf {
            copytruncate
            rotate 30
            daily
            create 644 root root
            dateext
            maxage 30
            missingok
            notifempty
            compress
            delaycompress
            postrotate
                ## Create symbolic links in daily/
                PATH=`/usr/bin/dirname $1`; FILE=`/bin/basename $1`;
                /bin/ln -s $1 $PATH/daily/$FILE
            endscript
        }
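    logrotate has no scheduler of its own; it is invoked by /etc/cron.daily/logrotate, and on CentOS 6 and later the cron.daily jobs are run by anacron, which applies the RANDOM_DELAY and START_HOURS_RANGE settings from /etc/anacrontab - that is usually where the +/- one hour of jitter comes from. If this particular set of logs must rotate at a fixed time, one approach (paths assumed) is to move the config out of /etc/logrotate.d/ and drive it from its own cron entry with its own state file:

        # /etc/cron.d/network-backup-rotate -- rotate at 02:00 sharp
        # (config moved to /etc/logrotate.network.conf so the anacron-driven
        #  daily run no longer picks it up)
        0 2 * * * root /usr/sbin/logrotate -s /var/lib/logrotate.network.status /etc/logrotate.network.conf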

    Read the article

  • Postgres Remote Access

    - by boot-baby-boot
    I am trying to connect to Postgres remotely. I have followed this tutorial http://www.cyberciti.biz/faq/howto-fedora-linux-install-postgresql-server/ and have executed the following commands to check whether remote access is possible:

        [root@printmyworld ~]# egrep -i "(listen_addresses|port|tcpip_socket).*=.+" /var/lib/pgsql/data/postgresql.conf
        #listen_addresses = '*'    # what IP address(es) to listen on;
        #port = 5432

        [root@printmyworld ~]# lsof +c0 -anPiTCP -upostgres
        COMMAND    PID   USER     FD  TYPE  DEVICE      SIZE  NODE  NAME
        postmaster 9323  postgres 3u  IPv4  2875987353        TCP   127.0.0.1:5432 (LISTEN)
        postmaster 9323  postgres 4u  IPv6  2875987354        TCP   [::1]:5432 (LISTEN)

    I am suspicious of this line:

        postmaster 9323  postgres 3u  IPv4  2875987353        TCP   127.0.0.1:5432 (LISTEN)

    My server IP address is 1yy.000.1xx.000. Should it be 1yy.000.1xx.000:5432?
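    That listen line is indeed the issue: with listen_addresses still commented out, Postgres binds only to loopback, so nothing outside the machine can reach port 5432. The usual two-step fix, sketched here with a placeholder client subnet, is to uncomment listen_addresses, add a matching pg_hba.conf rule, and restart:

        # /var/lib/pgsql/data/postgresql.conf
        listen_addresses = '*'          # or a specific interface address
        port = 5432

        # /var/lib/pgsql/data/pg_hba.conf  (client subnet is an example)
        host    all    all    192.168.1.0/24    md5

        # apply the change
        service postgresql restart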

    Read the article

  • nginx redirects and rewrites

    - by ptheofan
    I'm closing a website but want to keep a couple of URLs working, plus a static HTML file to serve as the index. All old URLs should redirect to root (/) except a couple of chosen locations. Here's an example of what I need to do. All of these should get a 301 permanent redirect to /:

        http://www.domain.tld/whatever/anything/realy  == 301 ==>   http://www.domain.tld
        http://www.domain.tld/blabla                   == 301 ==>   http://www.domain.tld
        http://www.domain.tld/                         == 301 ==>   http://www.domain.tld

    except for

        http://www.domain.tld/special.html             == serve ==> special.html

    and the root should serve the default file (as specified in index):

        http://www.domain.tld                          == serve ==> somefile.html
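    A minimal sketch of a server block that does this (file names taken from the example above; the redirect for "/" itself is dropped, since that URL has to serve the index rather than bounce to itself):

        server {
            listen 80;
            server_name www.domain.tld;
            root /var/www/closing-page;      # directory holding the two html files (assumed path)

            # the kept page, served as-is
            location = /special.html { }

            # root serves the chosen static file without re-matching locations
            location = / {
                try_files /somefile.html =404;
            }

            # everything else gets a permanent redirect to root
            location / {
                return 301 http://www.domain.tld/;
            }
        }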

    Read the article

  • Apache not serving pages stored in Subversion repository

    - by Stephen
    I've set up Apache and Subversion on an old PC, but Apache is not serving pages correctly. When I enter the address of my test site, http://HOME_IP_ADDRESS/test/index.html, I just get a File Not Found error and the following output in the error log:

        File does not exist: /var/www/html/svn/repos/test

    but I know the file exists: when I enter http://HOME_IP_ADDRESS/repos/test/index.html into the browser, I just get a listing of the HTML. In my Apache config file I have the document root set as follows:

        DocumentRoot "/var/www/html/svn/repos"

    so I'm not sure what is going on. I have SVN installed and I think it may have something to do with this.

    Edit: I changed the DocumentRoot location, which helped - pages in the new location were served correctly - so the problem is with serving pages from the repository itself.
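    That matches how Subversion stores data: a repository on disk is a database (FSFS revision files), not a browsable tree of HTML files, so pointing DocumentRoot at it cannot work. Repository content is only reachable through mod_dav_svn, and files served that way come back with a generic MIME type unless svn:mime-type is set, which would explain seeing the raw markup. A common layout, sketched with assumed paths, keeps DocumentRoot on an ordinary checkout and exposes the repository separately:

        # httpd.conf sketch (mod_dav_svn must be loaded)
        DocumentRoot "/var/www/html/site"      # a working-copy checkout, refreshed via "svn update"

        <Location /repos>
            DAV svn
            SVNPath /var/www/html/svn/repos    # the actual repository
        </Location>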

    Read the article

  • Clearing the terminal before displaying MOTD

    - by user1417933
    When I connect to my SSH server, it prompts me for the user name and password. After I have authenticated, it displays my MOTD and then shows the user prompt, like this:

        Using username "root".
        Authenticating with public key "everssh"
        this is my motd
        root@debian:~#

    I want to edit some file so that the screen is cleared before the MOTD prints (basically what calling the clear command would do). I heard that the MOTD is displayed by a cat /etc/motd in a startup file, but after searching around I can't find where it is called from. Does anyone know how I can find it?
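    On a stock Debian install the MOTD is not printed by a startup script at all: sshd hands the job to PAM via the pam_motd line in /etc/pam.d/sshd, which is why grepping init scripts for "cat /etc/motd" finds nothing. One low-tech way to get the clearing effect - assuming the connecting terminals understand standard ANSI/VT100 escapes, and that /etc/motd is a regular file on this system rather than a symlink to a boot-time generated file - is to prepend the clear-screen sequence to the MOTD itself:

        cp /etc/motd /etc/motd.orig
        # \033[H moves the cursor home, \033[2J erases the whole screen
        { printf '\033[H\033[2J'; cat /etc/motd.orig; } > /etc/motd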

    Read the article

  • ubuntu 9.10 on an external usb drive: grub1 does not work

    - by Toc
    I have installed Ubuntu on a partition of my external USB drive. Since I had problems with grub2, I uninstalled it and installed grub1. But now the USB drive doesn't boot any more, and I am dropped into the limited shell of grub1. If I manually type

        kernel (hd0,4)/vmlinuz-2.6.31-15-generic root=/dev/sdb4 ro quiet splash
        initrd (hd0,4)/boot/initrd.img-2.6.31-15-generic
        boot

    then Ubuntu loads, but if I execute the commands

        root (hd0,4)
        setup (hd0)

    as explained at http://www.gnu.org/software/grub/manual/html_node/Installing-GRUB-natively.html#Installing-GRUB-natively, the next time I boot from USB I am dropped into the limited grub shell again. How can I restore a working grub?
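    Being dropped into a bare grub> prompt rather than a menu often just means stage2 installed fine but no menu.lst was found on the boot partition. A sketch of a minimal /boot/grub/menu.lst built from the commands that already work by hand (device names are taken from the question and may need adjusting to your layout):

        default 0
        timeout 5

        title Ubuntu 9.10 (external USB)
        root (hd0,4)
        kernel /vmlinuz-2.6.31-15-generic root=/dev/sdb4 ro quiet splash
        initrd /boot/initrd.img-2.6.31-15-generic

    After creating the file, rerun root (hd0,4) and setup (hd0) from the grub shell so that the installed stage2 knows where to find the menu.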

    Read the article

  • How to stop LDAP authentication in ubuntu?

    - by Kery
    My OS is Ubuntu 12.04 and it uses LDAP authentication. Now I have a problem: another person needs to access my system, but he is in another domain, so he can't log in, and I have no rights to change the configuration on the LDAP server. So I have to find a workaround, for example disabling LDAP authentication and using local authentication (I have root rights on my system), or creating another account that is not registered in the LDAP server (I tried this but can't change the created account's password; the error is 'password reset by root is not supported'). Of course any other suggestion is appreciated! Thank you in advance!
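    A sketch of the switch to purely local accounts, assuming the machine uses the usual libnss-ldap/libpam-ldap (ldap-auth-client) setup and using a hypothetical local user called "guestdev":

        # stop NSS from consulting LDAP for users and groups
        sudo sed -i 's/ ldap//g' /etc/nsswitch.conf

        # drop the LDAP profile from the PAM stack (interactive checklist)
        sudo pam-auth-update

        # create the local account; its password lives in /etc/shadow,
        # so the "password reset by root is not supported" error no longer applies
        sudo adduser guestdev

    The earlier passwd failure is typical of pam_ldap trying to push the password change to the directory; once the PAM stack is local-only, passwd and adduser behave normally.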

    Read the article

  • Can't access phpMyAdmin because of host, username and password

    - by Engprof
    Hi everyone. When I try to access phpMyAdmin on Uniform Server I get the following error messages:

        #1045 - Access denied for user 'root'@'localhost' (using password: YES)

        phpMyAdmin tried to connect to the MySQL server, and the server rejected
        the connection. You should check the host, username and password in your
        configuration and make sure that they correspond to the information given
        by the administrator of the MySQL server.

    The funny thing is my username and password are both set to "root", and I have changed the IP address in the httpd.conf file to my unique IP address, so I still don't know what the problem is. Could somebody please help me out? Any help would be much appreciated.
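    Two quick checks worth doing, sketched here with the standard phpMyAdmin configuration keys (the password shown is just the one stated in the question):

        # first confirm the credentials work outside phpMyAdmin entirely
        mysql -h 127.0.0.1 -u root -p

        # then make sure phpMyAdmin's config.inc.php is passing the same values
        $cfg['Servers'][$i]['host']     = '127.0.0.1';
        $cfg['Servers'][$i]['user']     = 'root';
        $cfg['Servers'][$i]['password'] = 'root';

    If the command-line login also fails, the MySQL root password simply isn't what you think it is and needs to be reset; if it succeeds, the mismatch is inside config.inc.php (or an auth_type of "cookie" prompting for different credentials than the ones stored there).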

    Read the article

  • adding ftp user on ubuntu

    - by user46250
    I followed this tutorial, http://www.trainsignal.com/blog/how-to-set-up-safe-ftp-in-linux, to set up an FTP server with a user account:

        sudo mkdir -p /home/ftp/ftpuser
        sudo useradd ftpuser -d /home/ftp/ftpuser -s /bin/false
        sudo passwd ftpuser

    When I tried to connect remotely with the login ftpuser, it didn't work. It didn't even work with root UNLESS I removed root from ftpusers. I am confused - isn't ftpusers the list of users NOT allowed to use FTP? Where is the list of allowed users then, and why can't I connect with the ftpuser account I created?
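    /etc/ftpusers is indeed a deny list, so root belonging there is normal, and ftpuser does not need to be listed anywhere to be allowed. A frequent reason an account created exactly as above still cannot log in - assuming the daemon authenticates through PAM with the common pam_shells check, as vsftpd and proftpd typically do - is the /bin/false login shell: PAM rejects any shell not listed in /etc/shells. A sketch of the check and the workaround:

        # is the nologin shell known to the system?
        grep -x /bin/false /etc/shells || echo /bin/false | sudo tee -a /etc/shells

        # then retry the FTP login and watch the auth log for PAM messages
        sudo tail -f /var/log/auth.log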

    Read the article

  • sudo rejects password that is correct

    - by Ryan
    sudo (which I have configured to ask for a password) is rejecting my password as if I had mistyped it, but I am absolutely not typing it incorrectly. I have temporarily changed the password to alphabetic characters only, and it looks fine in plaintext in the same terminal. I have my username configured thus:

        myusername ALL=(ALL) ALL

    I am using my password, NOT the root password - the two are distinct. Just to be sure, I've tried both (even though I know the root password is not what I should use); neither works. I have additionally added myself to the group 'wheel' and included the following line:

        %wheel ALL=(ALL) ALL

    I'm kind of at the end of my rope here. I don't know what would cause sudo to act as though it was accepting my password, but then reject it. I have no trouble logging in with the same password, either at terminal shells or through the X11 login manager.
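    When the password itself is known to be right, the rejection usually comes from somewhere else in the PAM/sudoers chain, and the authentication log generally says where. A short diagnostic sketch (the log path and the use of su assume a RHEL-style system; on Debian/Ubuntu the log is /var/log/auth.log):

        # reproduce the failure, then read the reason sudo/PAM recorded
        su -c 'tail -n 20 /var/log/secure'

        # make sure the sudoers file itself parses cleanly
        su -c 'visudo -c'

    Typical culprits that show up there include pam_tally/pam_faillock having locked the account after earlier failed attempts, or a broken /etc/pam.d/sudo stack.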

    Read the article

  • RUNIT - created first service directory, "sv start testrun" does not work

    - by Veseliq
    I'm pretty new to runit. I installed it on an Ubuntu host. What I did:

        1) Created a directory testrun in /etc/sv.
        2) Created a script run in /etc/sv/testrun/run, with this content:

            #! /bin/bash
            exec /root/FP/annotate-output python /root/FP/test.py | logger -t svtest

        3) If I call /etc/sv/testrun/run directly, it executes successfully.
        4) When I run sv start testrun (or sv run testrun, sv restart testrun), all of them end with the same error message:

            fail: sv: unable to change to service directory: file does not exist

    Any ideas what I am doing wrong? I'm new to runit and base all my actions on the information found here: http://smarden.org/runit/
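    sv does not look in /etc/sv: it resolves service names relative to the supervised services directory ($SVDIR, which the Debian/Ubuntu packages set to /etc/service), and a service is only supervised once it is linked there for runsvdir to pick up. A sketch, with the directory assumed from the Ubuntu packaging:

        # make sure the run script is executable
        chmod +x /etc/sv/testrun/run

        # hand the service to runsvdir
        ln -s /etc/sv/testrun /etc/service/

        # a few seconds later a runsv process is attached and sv works by name
        sv status testrun
        sv restart testrun

    Alternatively sv can be pointed at the directory explicitly (sv status /etc/sv/testrun), but without a runsv process supervising it, start/restart still will not work.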

    Read the article

  • How to get more info out of the uninformative Windows 8 BSoD?

    - by amiregelz
    Windows 8's Blue Screen of Death is different from the previous Windows versions' one: In order to find out what caused the problem you need to write down or remember the search term it presents you with. The two search terms I have seen suggested so far are SYSTEM_SERVICE_EXCEPTION and HAL_INITIALIZATION_FAILED. Both of them are quite generic terms. While it’s nice not to have to look at a blue screen full of text, the previous BSoD was more informative than the Windows 8 BSoD, since it contained a detailed error code (information for diagnostic purposes that was collected as the operating system performed a bug check), which could get you closer to tracking down the root of the problem. How can I get more information about the error Windows 8 has encountered, in order to track down the root of the problem?
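    The detail did not disappear; it just moved off the screen. Windows 8 still writes a minidump (containing the full stop code and its four parameters) on every bug check, so the usual workflow is to read the dump afterwards. A sketch, assuming the default dump location and the Debugging Tools for Windows (WinDbg) installed:

        :: at a command prompt - dumps accumulate here by default
        dir C:\Windows\Minidump

        :: inside WinDbg, after File > Open Crash Dump
        .symfix
        !analyze -v

    !analyze -v prints the bug-check code, its parameters and the module most likely responsible, which is considerably more to go on than SYSTEM_SERVICE_EXCEPTION alone. Event Viewer's System log (source BugCheck, event 1001) records the same numeric code if installing a debugger is not an option.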

    Read the article

  • Deploy multiple django instances on one Host [migrated]

    - by tvn
    I am trying to set up multiple Django instances on one host with lighttpd. My problem is getting Django's FCGI working in subdirectories served by my webserver. My aim is the following:

        www.myhost.org/django0  ->  django1.fcgi on localhost:3000
        www.myhost.org/django1  ->  django2.fcgi on localhost:3001
        www.myhost.org/django2  ->  django3.fcgi on localhost:3002

    Unfortunately the following configuration doesn't even work for one:

        $HTTP["url"] =~ "^/django0/static($|/)" {
            server.document-root = "/home/django0/django/static/"
        }
        $HTTP["url"] =~ "^/django0/media($|/)" {
            server.document-root = "/usr/lib/python2.7/dist-packages/django/contrib/admin/media/"
        }
        $HTTP["url"] =~ "^/django0($|/)" {
            proxy.server = ( "" => ( (
                "host" => "127.0.0.1",
                "port" => "3001",
                "check-local" => "disable",
            ) ) )
        }

    The only response I get is a 404, and even that takes a long time. I found nothing suspicious in either the access.log or the error.log.
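    One thing worth checking, offered as an assumption about the setup rather than a certain diagnosis: proxy.server speaks plain HTTP to the backend, while a Django .fcgi process speaks the FastCGI protocol, so the two never understand each other even when the port is right (which could also explain the long hang before the 404). lighttpd's FastCGI module is configured like this instead:

        server.modules += ( "mod_fastcgi" )

        $HTTP["url"] =~ "^/django0($|/)" {
            fastcgi.server = (
                "/django0" => ((
                    "host" => "127.0.0.1",
                    "port" => 3000,
                    "check-local" => "disable",
                )),
            )
        }

    with the Django instance started in FastCGI mode on the same port (e.g. manage.py runfcgi method=threaded host=127.0.0.1 port=3000), and the same pattern repeated per instance on ports 3001 and 3002.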

    Read the article

  • How to change memory for DomU runtime

    - by saffron
    I have a Xen server with xen-4.1.3, linux-image-3.2.0-3-amd64, Debian squeeze and 16Gb of RAM. Domain-0 has 1Gb of RAM; the rest of the memory belongs to the hypervisor. I want to start a guest domain with a minimal amount of memory and increase it at runtime later. When I start a guest domain with 256Mb of RAM and run xm mem-set domu 4Gb, I get only ~3Gb in the domU, and free in the guest says:

        root@test:~# free
                     total       used       free     shared    buffers     cached
        Mem:       2830620      72868    2757752          0       2432      43504
        -/+ buffers/cache:      26932    2803688
        Swap:      1048572          0    1048572

    and the guest's dmesg says:

        [ 0.000000] Memory: 175912k/2883584k available (3527k kernel code, 448k absent, 2707224k reserved, 3210k data, 612k init)

    When I start the guest domain with 2Gb of RAM, I can run xm mem-set domu 7Gb and get ~7Gb of RAM in the guest:

        root@test:~# free
                     total       used       free     shared    buffers     cached
        Mem:       6828228      74944    6753284          0       1328      12568
        -/+ buffers/cache:      61048    6767180
        Swap:      1048572          0    1048572

    and the guest's dmesg:

        [ 0.000000] Memory: 1674960k/16651264k available (3527k kernel code, 448k absent, 14975856k reserved, 3210k data, 612k init)

    How can I start a guest domain with a minimal amount of RAM (256Mb) and later grow it up to 15Gb?
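    The ballooning ceiling is fixed when the guest boots: the domU kernel sizes its page metadata for the maximum memory declared at start, so a guest started with only memory = 256 cannot later be grown past that limit no matter what xm mem-set asks for. The usual approach, sketched with values picked to match the 15Gb target, is to declare a small current allocation but a large maximum in the domU config:

        # in the domU configuration file (values in MiB)
        memory = 256      # what the guest actually gets at boot
        maxmem = 15360    # ceiling the balloon driver may grow to

        # afterwards, from dom0
        xm mem-max domu 15360
        xm mem-set domu 8192

    The trade-off is the one already visible in the dmesg "reserved" figure: a guest with a high maxmem spends part of its small initial allocation on metadata for memory it does not yet have.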

    Read the article

  • Cannot connect Solaris 10 to Windows 7

    - by user999353
    I'm trying to connect from Windows Explorer on Windows 7 to a Solaris VM (powered by VMware Player). When I try to map the network drive I get the following:

        The specified server cannot perform the requested operation.

    I am using the following UNC path, which worked on a machine I used before; the only thing that has changed is the IP address of the Solaris machine. I am able to connect to the VM via PuTTY.

        \\1.2.3.4\xxx\yyy

    I checked, and I think Samba is running:

        root 375   1 0 09:53:39 ? 0:00 /usr/sfw/sbin/smbd -D
        root 376 375 0 09:53:40 ? 0:00 /usr/sfw/sbin/smbd -D

    Does anyone have any ideas?
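    Two checks on the Solaris side that usually narrow this down, assuming the bundled SFW Samba (binaries under /usr/sfw) is the one in use:

        # does smb.conf parse, and which shares does it actually define?
        /usr/sfw/bin/testparm -s

        # can Samba itself serve the share list locally, bypassing Windows?
        /usr/sfw/bin/smbclient -L localhost -U xxx

    If smbclient lists the shares but Windows 7 still refuses, the mismatch is often protocol/authentication negotiation between Windows 7 and an old Samba release, in which case updating Samba or relaxing Windows 7's "LAN Manager authentication level" policy is the usual direction to look. Also note that a UNC path of \\server\xxx\yyy requires a share actually named xxx; only the part after the share name is a directory inside it.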

    Read the article

  • Applications on my laptop crash every 10-20 minutes [closed]

    - by user1731110
    On my Windows 7 Home Premium machine, several applications crash after a few minutes, even when I am not touching the system. For example, Outlook 2007 works fine but then suddenly crashes. The same goes for Visual Studio 2012: it also crashes after a few minutes, whether or not I am working on a project. The same behaviour shows up in other applications (Skype, ooVoo and so on). I use Microsoft Security Essentials and there is no virus on my system. I also checked the system with the Sophos anti-rootkit tool and there is no rootkit. What could be the problem?
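    Crashes spread across unrelated applications point at something shared - a common DLL (shell extensions, codecs, antivirus hooks), failing RAM, or a system component - so the first step is to read what Windows recorded for each crash. A short checklist using only built-in tools:

        :: Reliability Monitor - one timeline entry per crash, with the faulting module
        perfmon /rel

        :: Event Viewer - Application log, look for Event ID 1000 (Application Error)
        eventvwr.msc

        :: memory is a frequent shared cause; schedule the built-in RAM test
        mdsched.exe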

    Read the article

  • Amazon S3 - Storage Class and Server Side Encryption

    - by Steven
    Ahhh! I am using Amazon S3 as some low-priced storage to clear down our SAN. I created the bucket and created a root folder, set the storage class to Standard and server-side encryption to AES, and started a copy job to move the files. Some files copied over, and I checked them:

        Reduced Redundancy
        Encryption set to none

    WTF? So I deleted all files and folders, manually recreated the folder structure, and again set the storage class and encryption level. I copied some files and bam - still showing, at the file level, Reduced Redundancy and no encryption. So my question is this: (a) is it really stored redundantly and encrypted and just not showing it properly (the root folder shows it, so how can the files not?), or (b) am I being a huge tool and missing something?
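    For what it's worth, storage class and server-side encryption are properties of each individual object, set at the moment it is uploaded; "folders" in S3 are just zero-byte key prefixes, so settings applied to the root folder are never inherited by objects copied in later. Whatever copy job is used has to request Standard storage and SSE on every PUT. A sketch with the AWS CLI (bucket name and prefix are placeholders):

        aws s3 cp /mnt/san-archive s3://example-archive-bucket/root-folder/ \
            --recursive --storage-class STANDARD --sse AES256

    Objects that already landed as Reduced Redundancy / unencrypted can be fixed in place by copying them onto themselves with the desired settings, since S3 treats that as a new PUT.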

    Read the article

  • NGINX Document Location

    - by GLaDOS
    I want to be able to access a given URL, example.com/str. The problem is that the PHP file that I want to connect to is in the directory /str/public/. In my nginx logs, I see that it is trying to connect to /str/public/str/index.php. Is there any way to remove that last 'str' in the document request? Below is my location directive in sites-available/default:

        location /str {
            root /usr/share/nginx/html/str/public/;
            index index.php index.html index.htm;

            location ~ ^/str/(.+\.php)$ {
                try_files $uri = 404;
                root /usr/share/nginx/html/str/public/;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }
        }

    Thank you all so much in advance.
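    This is the standard root-versus-alias distinction: root appends the entire request URI (including the leading /str) to the configured path, which is exactly how /usr/share/nginx/html/str/public/str/index.php gets built, whereas alias substitutes the matched prefix. A sketch of one way to restructure it, keeping the paths from the question - alias handles the static files, and a regex capture builds SCRIPT_FILENAME for PHP without the prefix (the "= 404" in the original would also need to be the single token "=404" to behave as intended):

        location /str {
            alias /usr/share/nginx/html/str/public;
            index index.php index.html index.htm;
        }

        location ~ ^/str/(.+\.php)$ {
            root /usr/share/nginx/html/str/public;
            fastcgi_pass unix:/var/run/php5-fpm.sock;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root/$1;
            include fastcgi_params;
        }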

    Read the article

  • sa2 -A /var/log/sa/sa13: No such file or directory

    - by user53925
    I have sysstat version 7.0.2, and /etc/sysconfig/sysstat has the entry HISTORY=27. This is on a Red Hat Enterprise server 5.6. The cron setup for this is:

        # run system activity accounting tool every minute
        * * * * * root /usr/lib64/sa/sa1 1 1
        # generate a daily summary of process accounting at 23:53
        53 23 * * * root /usr/lib64/sa/sa2 -A

    I get the following error from the sa2 -A cron job:

        find: /var/log/sa/sa13: No such file or directory

    Looking at the directory /var/log/sa, the files sa01 through sa10 exist (sa01 created on Sep 1, sa02 created on Sep 2 and so on), and the rest of the files are sa14 through sa31 (created from Aug 14 to Aug 31). I have not made any changes on the server, so I am not sure why I am getting these error messages, and is there a way to fix this? Someone suggested creating empty files sa11 through sa14 to fix this, but I am not sure if that might mess something up.
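    The find error suggests sa2's housekeeping expected a data file for the 13th that sa1 never wrote, so the real question is why the collector skipped those days (commonly the machine or crond was down, since sa1 only records while the system is up). A couple of checks, assuming the stock RHEL 5 sysstat packaging:

        # is today's file being written right now?
        ls -l /var/log/sa/sa$(date +%d)

        # run the collector by hand once; it should create/append today's file
        /usr/lib64/sa/sa1 1 1

        # was crond down, or the box off, around the 11th-13th?
        grep -i crond /var/log/messages* | tail

    Creating empty saDD files would at most silence the find message; the missing days' data cannot be recreated after the fact.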

    Read the article
