Search Results

Search found 12017 results on 481 pages for 'no root'.


  • Forgot password to development database

    - by ninja08
    I've created a database using the terminal while following along with a tutorial, although I had a lot of trouble getting the databases to install. After finally getting it to work I changed a few things, actually just the name of the database (to just "next") using the rake command. The password should be 'secret password'. How can I find out what the password is, or change it? It doesn't seem to have been written to my database.yml file, which still just lists 'root' as the username with no password in there.
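
    The question doesn't say which database server the tutorial used, so take MySQL as an assumption; a fresh install commonly has a blank root password. A minimal sketch of checking and setting it, then mirroring it in the Rails config (the password value is a placeholder):

        mysql -u root                                  # try logging in with no password first
        mysqladmin -u root password 'secretpassword'   # set one (placeholder value)
        # then mirror it in config/database.yml:
        #   development:
        #     username: root
        #     password: secretpassword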

    Read the article

  • Webmin apache on CentOS 6.3 results in 403 forbidden, permissions are OK

    - by Mario De Schaepmeester
    First of all, I will mention that the permissions are fine for the document root directory, which is /webapps/nimbus/www/public_html. The www directory contains a PHP application; PHP is a problem for later if it doesn't work, as I've tested it with a plain HTML file (which does not work either). I just get 403 Forbidden responses. The permissions are 755 on webapps and all subdirectories. I've checked other questions here and on the internet, but they were all about those permissions. Whatever info you still need, just ask; I don't know what's relevant, as it's the first time ever I'm using Webmin or configuring Apache.
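
    With the leaf permissions already correct, a 403 usually comes from a parent directory missing the execute bit, a <Directory> block in the Apache config, or (on CentOS) SELinux labels. A few hedged checks, using the path from the question:

        namei -m /webapps/nimbus/www/public_html/index.html   # every directory on the path needs the x bit
        ls -Zd /webapps/nimbus/www/public_html                # SELinux context under the CentOS default policy
        chcon -R -t httpd_sys_content_t /webapps              # relabel as web content if the context is wrong
        # and confirm the vhost allows the directory, along the lines of:
        #   <Directory /webapps/nimbus/www/public_html>
        #       Order allow,deny
        #       Allow from all
        #   </Directory>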

    Read the article

  • (Preferably) Encrypted Server Backups

    - by Shoaibi
    I have somehow managed to purchase a VPS after collecting money for some time; now the problem is I can't find a way to back up the server. My previous approach was: got a WebDAV account from mydisk.se, mounted it on the VPS, and used duplicity to create encrypted backups. The problem is it was only 2 GB, and it's running out of space. At my own place I don't have a stable internet connection, otherwise I have a 500 GB drive that I could surely use for backups. The VPS has a 12 GB HD, and I would like to back up /home, /root, /etc and /var (especially log and www). Any ideas are welcome. [EDIT] I am mostly looking for advice on setting up a backup target or the like (I know how to set up a backup server, but I can't, as I don't have a stable connection or the money to buy another VPS/disk for backup); I already have the tools needed.
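
    For reference, a duplicity invocation along the lines of the previous setup, covering the paths listed above; the GPG key ID and target URL are placeholders for whatever backup target ends up being available:

        duplicity --encrypt-key ABCD1234 \
            --include /home --include /root --include /etc \
            --include /var/log --include /var/www \
            --exclude '**' / \
            sftp://user@backuphost//backups/vps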

    Read the article

  • PHP does not allow https connections

    - by FunkyChicken
    Hey guys, I'm running PHP 5.4.0 and I can use neither cURL nor file_get_contents() for https connections. Using curl in a PHP script shows:

        [root@ns1]# /opt/php/bin/php -q test.php
        * About to connect() to www.google.com port 443
        * Trying 74.125.225.210...
        * connected
        * Connected to www.google.com (74.125.225.210) port 443
        * successfully set certificate verify locations:
        *   CAfile: /etc/pki/tls/certs/ca-bundle.crt
            CApath: none
        Segmentation fault

    Using file_get_contents() shows:

        Warning: file_get_contents(): Unable to find the wrapper "https" -
        did you forget to enable it when you configured PHP? in /test.php

    OpenSSL and openssl-devel are installed, and PHP is also configured with cURL support for SSL connections. See: http://i.imgur.com/ExAIf.png. Any idea what might be going wrong? Further info: CentOS 5.8 (64-bit) with Nginx 1.2.4.
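
    The missing "https" wrapper usually means this particular PHP build lacks the OpenSSL extension; having the system OpenSSL packages installed is not enough. Two hedged checks against the binary from the question (the curl segfault may be a separate library mismatch):

        /opt/php/bin/php -m | grep -i openssl            # no output = extension absent
        /opt/php/bin/php -i | grep 'Configure Command'   # look for --with-openssl
        # if it is missing, PHP has to be rebuilt with:
        #   ./configure --with-openssl ... && make && make install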

    Read the article

  • Trying to install SawMill and getting the following error:

    - by Itai Ganot
        [root@sawmill sawmill]# ./sawmill
        ./sawmill: error while loading shared libraries: libldap-2.3.so.0: cannot open shared object file: No such file or directory

    Using yum provides libldap_r-2.3.so.0, I found that the package which includes this file is compat-openldap-2.3.43-2.el6.i686. After installing it, I still get the error. If I use locate, I can find the file in /usr/lib, so I tried to create a symbolic link to the file from /usr/lib to /usr/lib64, but I still get the same error. I also tried setting LD_LIBRARY_PATH=/usr/lib/ and LD_LIBRARY_PATH=/usr/lib64, but it doesn't allow me to run the sawmill installation script. Does anyone know how to solve this issue?
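
    Since the package providing libldap-2.3.so.0 is i686, sawmill is presumably a 32-bit binary, and the runtime loader for it searches /usr/lib rather than /usr/lib64 (so a symlink into /usr/lib64 cannot help). Some hedged diagnostics:

        file ./sawmill                    # confirm 32-bit vs 64-bit ELF
        ls -l /usr/lib/libldap-2.3.so.0   # the compat library should live here
        ldd ./sawmill | grep ldap         # what the loader actually resolves
        ldconfig                          # refresh the linker cache after installing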

    Read the article

  • How can I remove the ssh last login info?

    - by Gnijuohz
    Whenever I log into a server using SSH, the prompt gives me "last login" information. I was wondering where this information comes from. How can I remove this record, so that when someone else logs into the same server, that person won't see my login info with my IP in it? For the record, I am not hacking someone's computer, and the server runs Ubuntu 12.04. EDIT: which file logs this kind of information? If I find the file, then I can do anything to it as root. Thanks.
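
    The "Last login" banner is read by sshd from /var/log/lastlog (the history shown by the last command lives in /var/log/wtmp). A hedged sketch; note that truncating these files wipes the records for all users, not just one:

        lastlog -u gnijuohz     # inspect one user's record (username is illustrative)
        : > /var/log/lastlog    # truncate the lastlog database
        : > /var/log/wtmp       # truncate the history used by `last`
        # alternatively, disable the banner in /etc/ssh/sshd_config:
        #   PrintLastLog no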

    Read the article

  • Getting molly-guard to work with sudo

    - by 0xC0000022L
    The program molly-guard is a brilliant little tool which will prompt you for a piece of information before you reboot or shut down a system; usually it asks for the hostname. So when you work a lot via SSH, you won't end up taking down the wrong server just because you were in the wrong tab or window. Now, this all works fine when you say reboot on the command line while you are already root. However, it won't work if you do sudo reboot (i.e. it won't even ask). How can I get it to work with sudo as well? System: Raspbian (latest, including updates), package molly-guard version 0.4.5-1.
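
    molly-guard works by shadowing the real binaries with wrapper scripts in /usr/sbin, so it only fires when /usr/sbin precedes /sbin in the PATH; sudo substitutes its own secure_path, which is one plausible culprit here. A hedged check and fix:

        sudo env | grep '^PATH='   # the PATH sudo actually uses
        sudo which reboot          # /usr/sbin/reboot = the molly-guard wrapper
        # if it prints /sbin/reboot, edit /etc/sudoers via visudo so that
        # secure_path lists /usr/sbin first, e.g.:
        #   Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"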

    Read the article

  • HDFS datanode startup fails when disks are full

    - by mbac
    Our HDFS cluster is only 90% full, but some datanodes have disks that are 100% full. That means when we mass-reboot the entire cluster, some datanodes completely fail to start with a message like this:

        2013-10-26 03:58:27,295 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode:
        java.io.IOException: Mkdirs failed to create /mnt/local/sda1/hadoop/dfsdata/blocksBeingWritten

    Only three have to fail this way before we start experiencing real data loss. Currently we work around it by decreasing the amount of space reserved for the root user, but we'll eventually run out. We also run the re-balancer pretty much constantly, but some disks stay stuck at 100% anyway. Changing the dfs.datanode.failed.volumes.tolerated setting is not the solution, as the volume has not failed. Any ideas?
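
    For reference, the workaround described above (shrinking the ext filesystem's root-reserved blocks) looks like this; the device name is illustrative. The longer-term knob is dfs.datanode.du.reserved, which makes each datanode stop short of filling a volume:

        tune2fs -m 1 /dev/sda1   # cut root-reserved space from the 5% default to 1%
        # hdfs-site.xml (value in bytes; ~10 GB here is an example figure):
        #   <property>
        #     <name>dfs.datanode.du.reserved</name>
        #     <value>10737418240</value>
        #   </property>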

    Read the article

  • The cd command using a variable to a mapped NFS volume within ssh in a Linux script is not working

    - by Bhavya Maheshwari
    I have to do the following from within a bash script. The /VMNFS/ folder is present on the Linux box from which the script is run, and is mapped on the machine into which I am ssh'ing as an NFS mount at /vmfs/volumes/VMNFS/. The second cd command doesn't work, neither with the symbolic path name nor with the physical path name. Why, and how do I rectify this?

        #!/bin/bash
        ssh -2 [email protected] /bin/sh <<\EOF
        vmfile_path=`grep / vmvar_file`
        datastore_path=/vmfs/volumes/VMNFS/
        cd $datastore_path && echo "The present working directory is" `pwd -P`
        esxi_vmfile_path_sub=`pwd -P` && echo "variable value is" $esxi_vmfile_path_sub
        esxi_vmfile_path=`echo $vmfile_path | sed "s:/VMNFS:$esxi_vmfile_path_sub:"`
        cd "$esxi_vmfile_path"
        EOF

    Output:

        The present working directory is /vmfs/volumes/65335ec4-46d12e41
        variable value is /vmfs/volumes/65335ec4-46d12e41
        can't cd to /vmfs/volumes/65335ec4-46d12e41/TPAE7.5/
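
    The error text suggests the resolved directory itself is the problem rather than the cd mechanics. One hedged debugging step, added inside the same heredoc, to confirm whether the final path exists (and matches case exactly) on the remote side:

        ls -ld "$esxi_vmfile_path" || echo "no such directory: $esxi_vmfile_path"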

    Read the article

  • How do I configure DFS replication using the command line?

    - by zneak
    Hello everyone, I'm in the process of making a script to automate DFS creation and replication for an exam I have next week. So, assuming I have a namespace:

        dfsutil root adddom \\Foo\bar 'My namespace'

    And I have a link:

        dfsutil link add \\Foo\Bar\CoolStuff \\Server2\CoolStuff 'Neat stuff'

    How can I use the command line to replicate \\Server2\CoolStuff over to, say, \\Server3\CoolStuff? When I use dfscmd:

        dfscmd /add \\Foo\Bar\CoolStuff \\Server3\CoolStuff

    it says it completed correctly, but opening up the MMC shows that there are no replication groups for CoolStuff. Thanks!
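
    dfscmd /add only registers another folder target; replication itself is managed by the DFSR tooling. A sketch using dfsradmin (shipped from Server 2003 R2 onwards); the group name and local path are illustrative, and the exact flag spellings are assumptions worth verifying against dfsradmin's built-in help:

        dfsradmin rg new /rgname:CoolStuffRG
        dfsradmin member new /rgname:CoolStuffRG /memname:Server2
        dfsradmin member new /rgname:CoolStuffRG /memname:Server3
        dfsradmin rf new /rgname:CoolStuffRG /rfname:CoolStuff
        dfsradmin conn new /rgname:CoolStuffRG /sendmem:Server2 /recvmem:Server3
        dfsradmin membership set /rgname:CoolStuffRG /rfname:CoolStuff /memname:Server2 /localpath:C:\CoolStuff /isprimary:true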

    Read the article

  • Can't figure out how to make Slitaz USB persistent

    - by Dennis Hodapp
    I installed SliTaz on my USB drive; however, I can't figure out how to make it persistent automatically. Different sources tell me different ways to make it persistent. One told me to add "slitaz home=usb" to the syslinux.cfg file, like this:

        append initrd=/boot/rootfs.gz rw root=/dev/null vga=normal autologin slitaz home=usb

    but it didn't work for me. http://www.slitaz.org/en/doc/handbook/liveusb.html gives an example of how to do it manually, but I didn't try it, and I also want it to happen automatically. custompc.co.uk/features/602451/make-any-pc-your-own-with-linux-on-a-usb-key.html is an older article that also explains how to make the USB persistent, but I don't want to try it because it looks outdated (from 2008). Does anyone know the best way to make the USB automatically persistent?
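
    For completeness, the append options only take effect on the boot entry that is actually selected, so here is a sketch of a full syslinux.cfg stanza built around the line quoted above (the label and kernel path are illustrative and should match the existing file):

        LABEL slitaz
            KERNEL /boot/bzImage
            APPEND initrd=/boot/rootfs.gz rw root=/dev/null vga=normal autologin slitaz home=usb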

    Read the article

  • Can Solaris RBAC roles be ported to Linux using SELinux only?

    - by Jimmy
    We are migrating an application from Solaris to Linux, and the main user is allowed, through the use of RBAC roles, to run a few system commands like svccfg/svcadm (chkconfig on Red Hat). Is it possible, using only SELinux (no sudo), to allow a normal user to run chkconfig off/on (basically give it the ability to add/remove services)? My approach was to try to create an SELinux user with a corresponding SELinux role that manages the app's domain/type and is allowed to transition to all other domains required to run chkconfig, tcpdump or any other system utility usually restricted to root access only. All my attempts so far have failed, so my second question would be: where could I find good documentation that applies to this specific problem?
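
    One caveat worth keeping in mind: SELinux only ever restricts, it cannot grant access that the ordinary Unix permissions deny, so the DAC side has to allow the operation as well. With that said, a rough sketch of the SELinux-user plumbing under the targeted policy (user, role and account names are illustrative; the domain transitions still require a custom policy module):

        semanage user -a -R "staff_r sysadm_r" appadmin_u   # SELinux user with roles
        semanage login -a -s appadmin_u appuser             # map the Linux account to it
        # a custom policy module must then allow the role to transition into
        # the domains chkconfig touches (initrc_exec_t and friends)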

    Read the article

  • openssl creates invalid signature if run by a different user

    - by divB
    Very strange problem here: openssl successfully creates signatures, but only those created as root are valid, whereas those created by another user (www-data) are invalid. All files are readable and there are no error messages:

        # echo -ne Test | openssl dgst -ecdsa-with-SHA1 -sign activation.key > /tmp/asRoot.der
        # su www-data
        $ echo -ne Test | openssl dgst -ecdsa-with-SHA1 -sign activation.key > /tmp/asWww-data.der
        $ uname -a
        Linux linux 2.6.32-5-openvz-amd64 #1 SMP Mon Feb 25 01:16:25 UTC 2013 i686 GNU/Linux
        $ cat /etc/debian_version
        6.0.7

    Both files (asRoot.der and asWww-data.der) are transferred to a different computer for verification with the public key:

        $ echo -ne Test | openssl dgst -verify activation.pub -keyform DER -signature asRoot.der
        Verified OK
        $ echo -ne Test | openssl dgst -verify activation.pub -keyform DER -signature asWww-data.der
        Verification Failure

    That can't be true! What's wrong here?
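
    One likely culprit, offered as a guess from the transcript: echo -ne is not portable. root's shell (bash) interprets the flags, while www-data's shell on Debian 6 may be dash, whose echo prints "-ne Test" literally, so the two users sign different bytes. A quick way to compare, plus the portable alternative:

        echo -ne Test | od -c             # bash builtin: T e s t
        sh -c 'echo -ne Test' | od -c     # dash builtin: - n e, a space, T e s t \n
        printf '%s' Test | openssl dgst -ecdsa-with-SHA1 -sign activation.key > /tmp/sig.der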

    Read the article

  • Lost sudo/su on Amazon EC2 instance

    - by barrycarter
    I have an Amazon EC2 instance. I can log in just fine, but neither su nor sudo works now (they worked fine previously): su requests a password, but I log in using SSH keys, and I don't think the root user even has a password. sudo <anything> does this:

        sudo: /etc/sudoers is owned by uid 222, should be 0
        sudo: no valid sudoers sources found, quitting

    I probably did chown ec2-user /etc/sudoers (or, more likely, chown -R ec2-user /etc, because I was sick of rsync failing), so this is my fault. How do I recover? I stopped the instance and tried the "View/Change User Data" option on the AWS EC2 console, but this didn't help. EDIT: I realize I could kill this instance and create a new one, but I was hoping to avoid something that extreme.
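
    With both su and sudo gone, the usual recovery for an EBS-backed instance is offline: stop it, attach its root volume to a rescue instance, fix the ownership there, and move the volume back. A hedged sketch (device and mount point are illustrative; a blanket chown of /etc may still need per-file fix-ups afterwards):

        # on the rescue instance, after attaching the broken root volume:
        sudo mount /dev/xvdf1 /mnt/rescue
        sudo chown root:root /mnt/rescue/etc/sudoers
        sudo chmod 0440 /mnt/rescue/etc/sudoers
        # if chown -R ec2-user /etc really happened, put the rest back too:
        sudo chown -R root:root /mnt/rescue/etc
        sudo umount /mnt/rescue
        # then detach and reattach the volume to the original instance as its root device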

    Read the article

  • Set global handling for PHP scripts in NGINX + PHP-FPM

    - by Radio
    I have to define fastcgi_pass for every virtual host. How do I define it globally?

        server {
            listen 80;
            server_name www.domain.tld;

            location / {
                root /home/user/www.domain.tld;
                index index.html index.php;
            }

            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /home/user/domain.tld$fastcgi_script_name;
                include fastcgi_params;
            }
        }
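
    nginx has no truly global fastcgi_pass, but the PHP handler can be factored into one snippet that every server block includes, with $document_root replacing the hard-coded path (the snippet file name is illustrative):

        # /etc/nginx/php_fastcgi.conf
        location ~ \.php$ {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }

        # each vhost then shrinks to:
        server {
            listen 80;
            server_name www.domain.tld;
            root /home/user/www.domain.tld;
            index index.html index.php;
            include /etc/nginx/php_fastcgi.conf;
        }

    Moving root up to the server block is what lets $document_root resolve correctly inside the shared snippet.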

    Read the article

  • Best way to block a country by IP address?

    - by George Edison
    I have a website that needs to block a particular country based on IP address. I am well aware that IP-based blocking is not a foolproof method for blocking visitors, but it is a necessary step in the right direction. Since I'm using PHP, what I would do is use a GeoIP database like geoplugin.net. However, I'm curious to know whether there's a better way of doing this. The website is on a shared web server (I don't have root access) and it is running Apache on CentOS. I guess my question is: can an .htaccess file be configured to block by IP using an external source to look up IP addresses?
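
    An .htaccess file cannot call out to an external service per request, so the usual shared-hosting pattern is to regenerate its Deny rules from a downloaded per-country CIDR list on a schedule. A hedged sketch (the list URL and path are placeholders; any existing rules would need to be merged in rather than overwritten):

        {
          echo 'Order Allow,Deny'
          echo 'Allow from all'
          curl -s https://example.com/country-cidrs.txt | sed 's/^/Deny from /'
        } > /home/user/public_html/.htaccess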

    Read the article

  • UNIX "find" command, match literal "dot"

    - by Robottinosino
    I need files ending with ".pdf" or ".png"; here's my attempt:

        find /Users/robottinosino/Desktop/_PublishMe_ -type f -regex '.*[pdf|png]'

    This incorrectly includes files ending with "Apdf", "Zpdf", etc. (missing the literal dot before the file extension). I tried adjusting the pattern to:

        find /Users/robottinosino/Desktop/_PublishMe_ -type f -regex '.*\.[pdf|png]'

    but then no results are returned. Escaping the . with a backslash does not work. Why?

        [0] $ uname -a
        Darwin Robottinosino.local 10.8.0 Darwin Kernel Version 10.8.0: Tue Jun 7 16:33:36 PDT 2011; root:xnu-1504.15.3~1/RELEASE_I386 i386

    Thanks!
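
    The reason: [pdf|png] is a bracket expression matching exactly one character from the set p, d, f, |, n, g, not an alternation, so the second pattern demands a literal dot followed by a single such character at the very end and matches nothing. BSD find on OS X takes -E for extended regexes, where grouping works as intended:

        find -E /Users/robottinosino/Desktop/_PublishMe_ -type f -regex '.*\.(pdf|png)'
        # or sidestep regexes entirely:
        find /Users/robottinosino/Desktop/_PublishMe_ -type f \( -name '*.pdf' -o -name '*.png' \)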

    Read the article

  • Cannot access the new cloned server even after new IP address assignment

    - by tough
    I was able to clone an Ubuntu 10.04 server residing in the cloud. It appeared that I was not getting an IP for the new VM, so I followed some of these steps:

        # cd /etc/udev/rules.d
        # cp 70-persistent-net.rules /root/
        # rm 70-persistent-net.rules
        # reboot

    I didn't follow the later commands, as I was unable to see the two eth MACs mentioned in the referenced site. After this I am able to see an IP for it, which is different from the original IP, and I have added the new IP to the DNS server. Now when I try to access it with its assigned (new) domain, I am directed to the old server. I can see both VMs running with different IPs. Where might I have gone wrong? I am new to this admin thing.
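
    When a newly added record still lands on the old server, it helps to separate name resolution from reachability. A couple of hedged checks (the names and addresses are placeholders):

        dig clone.example.com @dns-server-ip   # ask the authoritative server directly
        dig clone.example.com                  # ask whatever resolver the client uses
        ping new-vm-ip                         # confirm the clone answers on its new IP
        # a cached old A record (or its remaining TTL) is a common reason a
        # name keeps resolving to the original VM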

    Read the article

  • How to remotely install Linux via SSH?

    - by netvope
    I need to remotely install Ubuntu Server 10.04 (x86) on a server currently running RHEL 3.4 (x86). I'll have to be very careful, because no one can press the restart button for me if anything goes wrong. Have you ever remotely installed Linux? Which way would you recommend? Any advice on things to watch out for? Update: Thanks for your help. I managed to "change the tires while driving"! The main components of my method are drawn from HOWTO - Install Debian Onto a Remote Linux System, grub legacy: Booting once-only, grub single boot and kernel panic reboot, and Ubuntu Community Documentation: InstallationFromKnoppix. Here is the outline of what I did (see the sketch after the list):

        1. Run debootstrap on an existing Ubuntu server
        2. Transfer the files to the swap partition of the RHEL 3.4 server
        3. Boot into the swap partition (the debootstrap system)
        4. Transfer the files to the original root partition
        5. Boot into the new Ubuntu system and finish up the installation with tasksel, apt-get, etc.

    I tested the method in a VM and then applied it to the server. I was lucky enough that everything went smoothly :)
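
    Step 1 and the once-only boot from the guides referenced above look roughly like this; the target directory and mirror are illustrative, and grub legacy's once-only default falls back to the old entry if the new system never comes up:

        debootstrap --arch i386 lucid /mnt/newsys http://archive.ubuntu.com/ubuntu
        # grub legacy shell: use menu entry 2 on the next reboot only
        grub> savedefault --default=2 --once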

    Read the article

  • Git and Amazon EC2 public key denied

    - by MrNart
    I had git working before on /var/html/projectfolder, then realized it was a security risk, so I made a new folder /projects off the root folder, tried to replicate what I did, and now it doesn't work. Here is the backlog of what I did on my local machine and the EC2 server.

    Server (EC2):

        1. Added my public key to the authorized_user file in the ~/.ssh folder
        2. Created a bare repository: git init --bare
        3. Changed folder permissions:
               sudo chgrp -R ec2-user *
               sudo chmod -R g+ws *

    Local machine:

        1. Created a local repository with git init
        2. touch, add, commit a readme file
        3. Pointed origin master at EC2 via: git remote add origin ssh://ec2-user@remote-ip/path/to/folder

    This is my output:

        Permission Denied (publickey)
        fatal: The remote end hung up unexpectedly
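
    The publickey failure happens before git is involved at all, so plain ssh is the thing to debug. Note that sshd reads ~/.ssh/authorized_keys; a file named authorized_user, as in server step 1 above, is ignored. Hedged checks:

        ssh -v ec2-user@remote-ip   # verbose output shows which keys are offered
        # on the server, sshd also insists on strict permissions:
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys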

    Read the article

  • Weird behaviour from a cronjob

    - by The DOCTOR from TARDIS
    I have set the crontab like this:

        */5 0 * * * /www/permitChat.sh

    and /www/permitChat.sh is this:

        # We are setting the name of the file
        # in the variable, along with its complete path.
        sFilePath=`date +\/www\/ChatLogs\/%Y\/%m/%d_%m_%Y.txt`

        # First we set its permissions to
        # readable by all users, and then
        # modify them to be writable by only root.
        chmod a=r $sFilePath
        chmod u+w $sFilePath
        ls -lh $sFilePath

    The trouble I am facing is that the cron gets executed after 12:00 PM every day, instead of executing from 12:00 AM to 01:00 AM every 5 minutes. What could be wrong? All my system variables appear to be synced.
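
    Field two of the crontab line is the hour, so */5 0 * * * should indeed fire every five minutes from 00:00 through 00:55. Runs landing after noon instead suggest cron and the rest of the system disagree about the timezone. Hedged checks:

        date                               # current system time and zone
        grep CRON /var/log/syslog | tail   # when cron actually ran the job
        cat /etc/timezone                  # configured zone (Debian/Ubuntu)
        # cron reads the timezone at startup; if it changed since boot,
        # restarting the daemon is a plausible fix:
        service cron restart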

    Read the article

  • starting Tomcat from Eclipse

    - by Krns
    I've just got Eclipse for Java EE installed, and I also downloaded Apache Tomcat 6.0.29 and added it in the Eclipse preferences. I followed this tutorial, and all I get when I run it is a 404 page with the description "The requested resource (/WebService/) is not available." I can't even access the root folder of Tomcat at localhost:8080. However, if I start Tomcat through a console command, it works fine and the home page and examples are accessible. I'm a complete noob at Tomcat and Eclipse (I've been working with NetBeans before), so I've got no idea what's wrong.
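
    A common explanation, offered as a guess: Eclipse WTP launches Tomcat against a private configuration inside the workspace, which deploys only your own projects and no ROOT webapp, hence the 404 on localhost:8080. The switch lives in the server editor:

        # double-click the server in the Servers view, then under
        # "Server Locations" pick "Use Tomcat installation (takes control
        # of Tomcat installation)"; remove any deployed modules first.
        # After restarting it, localhost:8080 serves the stock Tomcat home page.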

    Read the article

  • DNS hierarchy not working, please help

    - by nikhilelite
    The setup is a sub-internal network (DNS1, WWW1, Gateway1) behind an internal network (DNS0, WWW0, Gateway0):

        DNS1:     192.168.250.3/24
        WWW1:     192.168.250.4/24
        Gateway1: 192.168.250.1/24 (internal); 192.168.0.150 to 192.168.0.175 (external)

        DNS0:     192.168.0.197/24
        WWW0:     192.168.0.197/24
        Gateway0: 192.168.0.1 (internal); 69.94.x.x (external, dynamic, ISP-controlled)

    Expected behavior: when using dig from internal (192.168.250.0/24) hosts to query a domain belonging to the 192.168.0.197 nameserver's hosts (for which it is authoritative), it should return the IP address. What's happening: after dig, the answer section is empty, and the query tries to reach an a.root server instead of 192.168.0.197, even though I have defined 192.168.0.197 as DNS in Gateway1's resolv.conf. Why? I need this working ASAP; can anyone here help?
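
    Assuming DNS1 runs BIND: resolv.conf only steers a host's own stub resolver, while a nameserver decides where to send queries from named.conf, so forwarding has to be configured on DNS1 itself. A sketch (the zone name is a placeholder):

        # named.conf on DNS1, global forwarding:
        #   options { forwarders { 192.168.0.197; }; forward only; };
        # or per-zone:
        #   zone "internal.example" {
        #       type forward;
        #       forwarders { 192.168.0.197; };
        #   };
        rndc reload
        dig somehost.internal.example @192.168.250.3   # retest against DNS1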

    Read the article

  • Install Eclipse / StatET on Debian server for all users.

    - by Joris Meys
    I've manually downloaded, unpacked and installed the latest Eclipse (3.6.1) on a Debian server (2.6.26-2-amd64). Eclipse can now be run by all users in our group, but when I tried to install the StatET plugin, I quickly found out that it was only visible and usable for me. I have a sudo password on my account and a root password. I wondered if sudo eclipse was all I needed, but as I'm very new to the whole sysadmin thing (our old one is on "prolonged leave" and currently working in Spain), I'd rather check before blowing up the server. Any help on how to configure Eclipse for all users simultaneously is very much appreciated.
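
    Plugins installed through the UI by a user who doesn't own the install tree land in that user's ~/.eclipse instead of the shared installation. Two common remedies, sketched under the assumption that Eclipse lives in a root-owned directory such as /opt/eclipse (the repository URL and feature ID are placeholders to adapt to StatET's actual update site):

        # 1) install headlessly as the owner of the install tree:
        sudo /opt/eclipse/eclipse -nosplash \
            -application org.eclipse.equinox.p2.director \
            -repository http://download.walware.de/eclipse-3.6 \
            -installIU de.walware.statet.feature.group
        # 2) or unpack the plugin into the shared dropins folder:
        sudo cp -r statet/* /opt/eclipse/dropins/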

    Read the article

  • CHROOT for shell script testing

    - by Josh
    I am looking at setting up a shell script in order to properly document and automate the process I am using to set up a few servers we have. In order to test the shell script through its different stages, I was thinking a chroot would be ideal, since I can wipe out the "virtual root" and recreate it on the fly. I have never used chroot before, however. I am just curious what exact steps I would need to follow to implement this process of creating a chroot (with the basic core functions that would be needed to install apache/php/etc.) and then destroying it.
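
    A minimal sketch using debootstrap on a Debian-family host; the release, mirror and target path are illustrative:

        sudo debootstrap stable /srv/testroot http://deb.debian.org/debian
        sudo mount -t proc proc /srv/testroot/proc   # some package scripts need /proc
        sudo chroot /srv/testroot /bin/bash          # enter the jail and run the script
        # inside: apt-get install apache2 php5 ...; exit when done
        sudo umount /srv/testroot/proc
        sudo rm -rf /srv/testroot                    # destroy; rebuild with debootstrap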

    Read the article
