Search Results

Search found 29876 results on 1196 pages for 'ubuntu 8 04 lts'.


  • KVM virtual machine networking, NAT and bridge together

    - by stoqlt
    I have two running KVM guests on an Ubuntu (Lucid) host. One of them uses the simplest NAT method, with DHCP inside. The other uses the bridge method, with a static IP inside. Both work fine. Can I mix the networking methods? I'd like to create a set of scripts that use the local 192.168.122.x address, regardless of whether the guest also has an additional bridged LAN interface. Having both eth0 and eth1 interfaces inside would be fine. Thanks for your interest.
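
    A minimal libvirt sketch of mixing both methods in a single guest (the bridge name br0 and the virtio model are assumptions; the 'default' network is libvirt's stock NAT network that hands out 192.168.122.x addresses):
      <interface type='network'>
        <source network='default'/>     <!-- NAT interface, DHCP inside the guest -->
        <model type='virtio'/>
      </interface>
      <interface type='bridge'>
        <source bridge='br0'/>          <!-- bridged LAN interface, static IP inside the guest -->
        <model type='virtio'/>
      </interface>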

    Read the article

  • Error "fileid changed" when accessing files over NFS

    - by Roman Prikhodchenko
    I have an nfs-kernel-server configured and running on Ubuntu 10.04 Server. My exports are:
      /export      THIRD_SERVER_IP(rw,fsid=0,insecure,no_subtree_check,async) SECOND_SERVER_IP(rw,fsid=0,insecure,no_subtree_check,async)
      /export/ebs  THIRD_SERVER_IP(rw,fsid=0,insecure,no_subtree_check,async) SECOND_SERVER_IP(rw,nohide,insecure,no_subtree_check,async)
    I mounted the exported folder on the second server with: mount -t nfs4 -o proto=tcp,port=2049 NFS_SERVER_IP_HERE:/ebs /ebs and it works just fine. I mounted it on the third server, but I cannot access files from it:
      ls -l /ebs
      ls: reading directory /ebs: Stale NFS file handle
      total 0
    The syslog on the third server says:
      kernel: [11575.483720] NFS: server NFS_SERVER_IP_HERE error: fileid changed
      kernel: [11575.483722] fsid 0:14: expected fileid 0x2, got 0x6e001
    Some info: uname -r reports 2.6.32-312-ec2 and uname -m reports i686.
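
    One hedged guess: the third server's two export lines both carry fsid=0, so that client sees two different filesystems claiming to be the NFSv4 pseudo-root and the file IDs clash. A sketch with fsid=0 kept only on the pseudo-root (untested; IP placeholders kept from the question):
      /export      THIRD_SERVER_IP(rw,fsid=0,insecure,no_subtree_check,async)
      /export/ebs  THIRD_SERVER_IP(rw,nohide,insecure,no_subtree_check,async)
    followed by exportfs -ra on the server and a fresh mount on the client.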

    Read the article

  • /etc/environment not being read into PATH variable

    - by Dan
    In Ubuntu the path variable is stored in /etc/environment. This is mine (I've made no changes to it; this is the system default): $ cat /etc/environment PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games" but when I examine my PATH variable: $ echo $PATH /home/dan/bin:/home/dan/bin:/bin:/usr/bin:/usr/local/bin:/usr/bin/X11 You'll notice /usr/games is missing (it was there up until a few days ago). My /etc/profile makes no mention of PATH. My ~/.profile is the default and only has: if [ -d "$HOME/bin" ] ; then PATH="$HOME/bin:$PATH" fi This only happens in GNOME, not in tty1-6. The entry is missing both in gnome-terminal and when I launch applications from the Applications menu. Anyone know what could be causing this? Thanks.
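
    As a stopgap (not a fix for whatever is dropping /etc/environment), the missing directory can be appended from ~/.profile, which the graphical login normally sources as well; a small sketch:
      # append /usr/games if it did not arrive via /etc/environment
      if [ -d /usr/games ] && ! echo "$PATH" | grep -q /usr/games ; then
          PATH="$PATH:/usr/games"
      fi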

    Read the article

  • PHP stop working after a server reboot

    - by Ido Bukin
    I rebooted my server and suddenly PHP-FastCGI stopped working. I tried: /etc/init.d/php-fastcgi restart I also tried to restart my Nginx: /etc/init.d/php-fastcgi restart How can I turn on my PHP again? My server runs: Ubuntu 11.10, Nginx 1.2.3, MySQL, PHP-FastCGI. Also, I want to ask: is it possible that I have two Nginx installs on my server running at the same time? When I check the Nginx version in the console it says the version is 1.2.3, but when I go to my site I see: 502 Bad Gateway nginx/1.0.5 How can I fix this?
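
    A few diagnostic commands that may show whether two Nginx builds are fighting over the site (a sketch; the binary paths are guesses, and the 502 itself usually just means the running nginx cannot reach the PHP-FastCGI socket or port):
      ps aux | grep '[n]ginx'                  # which nginx master processes are running, and from which binary
      sudo netstat -tlnp | grep ':80 '         # which process actually owns port 80
      nginx -V 2>&1 | head -n1                 # version of the nginx found first in $PATH
      /usr/sbin/nginx -V 2>&1 | head -n1       # version of a distro-packaged nginx, if one is installed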

    Read the article

  • How to create a Linux user without a password but being able to set it?

    - by Leonid Shevtsov
    I have a username and an SSH key for a (hypothetical) guy and I need to give him admin access to a Linux (Ubuntu) server. I want him to be able to log in via SSH and then set his password by himself over a secure connection, instead of passing the password around. I know how to make the password expire and force him to reset it on first login, but this doesn't work unless he already has some password, which I then have to tell him. I thought about making the password blank; SSH wouldn't allow login with it, but then anyone could su into the user. My question is: is there some best practice for creating accounts in such a way, or is setting a default password unavoidable?
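
    One common pattern, sketched here with placeholder user and key-file names: create the account with the password locked, install his public key, and grant admin rights, so the only way in is his SSH key.
      sudo adduser --disabled-password --gecos "" newadmin
      sudo install -d -m 700 -o newadmin -g newadmin /home/newadmin/.ssh
      sudo install -m 600 -o newadmin -g newadmin his_key.pub /home/newadmin/.ssh/authorized_keys
      sudo adduser newadmin admin              # "admin" is the sudo group on older Ubuntu releases
    Caveat: with the password locked he cannot answer sudo's password prompt, so either seed a temporary password and expire it (sudo passwd newadmin, then sudo passwd -e newadmin) or adjust the sudoers policy; which trade-off is acceptable is the open question here.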

    Read the article

  • How can I tell which interface my Supermicro IPMI is piggybacking on?

    - by lorin
    I've used IPMI before, but only on servers where the IPMI interface had a dedicated ethernet port. I've got an Ubuntu 10.04 server with two ethernet cards, which is supposed to have an IPMI interface on it (the motherboard is a Supermicro H8DMR-I2). From what I understand, the IPMI interface is piggybacking on one of the two NICs. Is there any way I can tell which NIC the IPMI interface is piggybacking on? Using ipmitool I've tried to set the IP address on the IPMI interface for the subnet for eth0, and then the subnet for eth1, and it's never reachable. (Can you even reach an IPMI interface from the same NIC it's piggybacking off of, or do you need to try connecting from a different machine on the network?) Also, is there anything special I need to do to enable it? I can access the IPMI interface locally using "ipmitool".
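
    A couple of hedged checks with ipmitool: the BMC has its own MAC address, often adjacent to one of the onboard NICs' MACs, which hints at which port it shares; and a shared-NIC BMC typically cannot be reached from the very host it sits in, so test from another machine on that subnet.
      sudo ipmitool lan print 1        # shows the BMC's IP settings and its MAC address
      ip link show                     # compare that MAC against the eth0/eth1 MACs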

    Read the article

  • Unable to SSH to EC2

    - by Walker
    I downloaded the cert-xxx.pem and pk-xxx.pem files and also the keypair.pem and moved them all to the /.ssh folder on my Ubuntu client machine. This is what I get when I try to SSH with -v at the end: debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey debug1: Next authentication method: publickey debug1: Trying private key: /root/.ssh/identity debug1: Trying private key: /root/.ssh/id_rsa debug1: Trying private key: /root/.ssh/id_dsa debug1: No more authentication methods to try. Permission denied (publickey). I am new to administering servers and I want to know if I should be trying to convert the pem files to id_rsa and id_dsa. I am not really sure if that is possible, and I don't know how else to get id_rsa and id_dsa from those pem files, or if there is any workaround. I managed to get access to EC2 the first time; this is my second try and I am unsuccessful so far. Any help is appreciated. Regards, Walker
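
    For what it's worth, cert-xxx.pem and pk-xxx.pem are the EC2 API (X.509) credentials and are not used for SSH at all; only the keypair .pem matters, and it does not need converting to id_rsa/id_dsa. A sketch of the usual login (the login name depends on the AMI, e.g. ubuntu or root):
      chmod 600 ~/.ssh/keypair.pem
      ssh -i ~/.ssh/keypair.pem ubuntu@ec2-xxx.compute-1.amazonaws.com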

    Read the article

  • Rackspace copy script failure message "java.net.SocketTimeoutException: Read timed out"

    - by user53864
    I am using Rackspace for an Ubuntu cloud server. Every day a script (I guess the script is from Rackspace) executes on the cloud servers; it copies the backup file to Rackspace Cloud Files and sends a mail saying the files were copied. I've scheduled the script on the cloud servers. I don't know much about the script, and I guess it is based on Cruise (as I could see build.xml, some jar files, ...). Every day the files are copied from the cloud servers to Rackspace, but sometimes, I don't know why, the files are copied yet an error failure message is sent, or sometimes the files are not copied at all and a failure message like the one below is sent. Error while backing up on Station1 on 03/03/2011 04:50 AM and reason for error is java.net.SocketTimeoutException: Read timed out Is anybody else using Rackspace? Does anybody have a fix for this?

    Read the article

  • OpenNebula VM submission failure

    - by user61175
    I am new to OpenNebula. The cloud is up and running, but the VM fails to be submitted to a node. I got the following error from the log file. ERROR: Command "scp ubuntu:/opt/nebula/images/ttylinux.img node01:/var/lib/one/8/images/disk.0" failed. ERROR: Host key verification failed. Error excuting image transfer script: Host key verification failed. The key verification keeps failing. I need to know what is going wrong ... thanks :)
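
    A hedged sketch of pre-accepting the node's host key for the account that runs the transfer driver on the front-end (the node name node01 is taken from the log; the oneadmin user and paths are assumptions):
      su - oneadmin
      ssh-keyscan node01 >> ~/.ssh/known_hosts      # or: ssh node01 true, and answer "yes" once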

    Read the article

  • How to configure Apache2 to host Django and PHP on multiple domains simultaneously?

    - by Bert B.
    I have a VPS (Ubuntu 10.04) that hosts multiple domains, one of them being a CodeIgniter (PHP) web app. The others are just static websites; no fancy backend languages required. Well, I am starting a new project and want to use Django. I have Django installed and mod_wsgi enabled in Apache2, but when I did the first steps of the documentation (https://docs.djangoproject.com/en/dev/howto/deployment/wsgi/modwsgi/) it seemingly overwrote my existing Apache2 configuration and served up the Django welcome page for all my domains. What should my httpd.conf file look like so that Django doesn't take over all my domains?
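
    The documentation's example puts WSGIScriptAlias / at server scope, which captures every request; scoping it inside its own name-based virtual host leaves the other domains alone. A rough sketch for Apache 2.2 (domain names and paths are placeholders, and each existing site keeps its own VirtualHost):
      NameVirtualHost *:80

      <VirtualHost *:80>
          ServerName django.example.com
          WSGIScriptAlias / /srv/djangoapp/django.wsgi
      </VirtualHost>

      <VirtualHost *:80>
          ServerName php.example.com
          DocumentRoot /var/www/php-site
      </VirtualHost>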

    Read the article

  • Requiring 802.1x login before allowing access to network resources

    - by Calvin Froedge
    I have a ZyXel GS2200-24 managed switch, and a free-radius server running on Ubuntu 11.10. Radius is configured and when I log into the switch the authentication goes through Radius. Now, I'm trying to ensure that access to web resources (as an example, I set up a web server on the ip 192.168.1.2) requires first authenticating with radius, before the switch will allow the connection. Am I correct that this should be handled at the switch level? What are these rules usually called / how are they usually defined?

    Read the article

  • forward all mail on a specified domain to script

    - by David
    Hey all! I run a disposable e-mail service that accepts all incoming mail and forwards it to a PHP script that stores it in a database for people to view. Before now, I have been on shared hosting with cPanel, which makes it easy to pipe e-mails to a script. Now, however, I have my own VPS, and it doesn't have cPanel. How do I pipe e-mails to a script? Further, how do I pipe e-mail sent to any address on certain specified domains to my script? You see, aside from the main domain, there are several alternate domains that people can use if the main domain is blocked, and on each domain I want any address to be usable (xyz@domain, abc@domain, anythingelse@domain). The VPS has Ubuntu 9.04 installed, and I have been experimenting with Postfix, though I can switch to Exim or Sendmail if it is easier.
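
    A rough Postfix sketch (the domain names, map file and script path are placeholders): declare the extra domains, catch every address on them with a regexp virtual alias, and pipe that alias to the script via /etc/aliases.
      # /etc/postfix/main.cf
      virtual_alias_domains = example1.com example2.com
      virtual_alias_maps = regexp:/etc/postfix/virtual_catchall

      # /etc/postfix/virtual_catchall
      /@(example1\.com|example2\.com)$/    mailhandler

      # /etc/aliases  (run newaliases afterwards)
      mailhandler: "|/usr/bin/php /var/www/store_mail.php"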

    Read the article

  • What is the simplest way to build your own .deb package?

    - by Calvin Fisher
    Having used Ubuntu for several years now, I've assembled a short list of scripts and packages that I always install on my computers. I would like to pack them up into a .deb to make it easier to get set up on a fresh OS installation. I'm imagining, for instance, one package that would install all of my custom BASH scripts that I've made for common tasks, and another one that would depend on other packages (like w64codecs) that I always install but forget that I need to until I go to do something and it's not there. It doesn't even have to be by-the-book; I'm not looking to deploy these publicly. I'm just looking to roll up all these tasks into one sudo dpkg --install. To quantify "simple" or "easy," I mean to say that I'm looking for the method with the fewest steps requiring the least technical knowledge and, most importantly, taking the least time.
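
    A minimal sketch with dpkg-deb (the package name, files and dependency are illustrative): lay out a directory tree that mirrors the target filesystem, add a DEBIAN/control file, and build.
      mkdir -p my-setup/DEBIAN my-setup/usr/local/bin
      cp ~/bin/* my-setup/usr/local/bin/
      # contents of my-setup/DEBIAN/control:
      #   Package: my-setup
      #   Version: 1.0
      #   Architecture: all
      #   Maintainer: Calvin <calvin@example.com>
      #   Depends: w64codecs
      #   Description: Personal scripts and the packages I always install
      dpkg-deb --build my-setup          # produces my-setup.deb, ready for sudo dpkg --install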

    Read the article

  • Filezilla/Puttygen doesn't recognize private key file

    - by devzoner
    I have generated a key for an Ubuntu Virtual Machine running on Azure Cloud Services (http://www.windowsazure.com/en-us/manage/linux/how-to-guides/ssh-into-linux/) with: openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myPrivateKey.key -out myCert.pem When I load the private key into Filezilla, it asks me to convert the format, but the conversion fails. The same happens with puttygen from the Linux console, using: puttygen myPrivateKey.key -o myKey.ppk In both cases I get the following error: puttygen: error loading `myPrivateKey.key': unrecognised key type By the way, this key doesn't have a passphrase. I found an old thread about it, but I'm using version 0.6.3, which is newer than what that thread recommends: http://fixunix.com/ssh/541874-puttygen-unable-import-openssh-key.html I've managed to solve this issue by using another GUI client, Fugu for Mac, but one of my co-workers uses Windows and I still have to figure this out. Since Filezilla is the de facto FTP client, I thought it would be easier to solve it there. Thanks
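
    One guess: a key written by that openssl invocation may be in PKCS#8 form ("BEGIN PRIVATE KEY"), which older puttygen releases do not import; rewriting it as a traditional RSA key first sometimes helps (a sketch, untested against this exact key):
      openssl rsa -in myPrivateKey.key -out myPrivateKey-rsa.key     # re-encode as "BEGIN RSA PRIVATE KEY"
      puttygen myPrivateKey-rsa.key -o myKey.ppk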

    Read the article

  • Apache hanging when MaxClients is reached

    - by Ash White
    My Apache 2.2 (prefork MPM) is hanging when MaxClients is reached, rather than queueing up requests and serving them when child processes become free. When this happens, the web server is totally unresponsive until it is manually restarted. The server stack is Ubuntu 8, MySQL 5, PHP 5. Hardware is dual Xeons (2.8) with 2GB of RAM. It serves 30,000 - 50,000 pageviews per day. Static images, CSS, and JS are offloaded to a separate server and PHP is cached using eAccelerator. The HTML output of many pages is cached to the filesystem. Relevant Apache directives:
      KeepAlive On
      MaxKeepAliveRequests 50
      KeepAliveTimeout 2
      StartServers 2
      MaxClients 150
      MinSpareThreads 25
      MaxSpareThreads 75
      ThreadsPerChild 25
      MaxRequestsPerChild 2000
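
    Incidentally, MinSpareThreads, MaxSpareThreads and ThreadsPerChild belong to the worker MPM and are ignored by prefork; a prefork-only block looks roughly like this (the numbers are illustrative, not tuned for this box):
      <IfModule mpm_prefork_module>
          StartServers           5
          MinSpareServers        5
          MaxSpareServers       10
          MaxClients           150
          MaxRequestsPerChild 2000
      </IfModule>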

    Read the article

  • xf86OpenConsole: Cannot open /dev/tty0 (No such file or directory)

    - by mazgalici
    root@mazgalici:~# startx X.Org X Server 1.7.6 Release Date: 2010-03-17 X Protocol Version 11, Revision 0 Build Operating System: Linux 2.6.24-28-server i686 Ubuntu Current Operating System: Linux mazgalici 2.6.18-194.26.1.el5.028stab079.2PAE #1 SMP Fri Dec 17 19:34:22 MSK 2010 i686 Kernel command line: quiet Build Date: 10 November 2010 11:25:26AM xorg-server 2:1.7.6-2ubuntu7.4 (For technical support please see ) Current version of pixman: 0.16.4 Before reporting problems, check to make sure that you have the latest version. Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. (==) Log file: "/var/log/Xorg.0.log", Time: Tue Jan 11 01:28:48 2011 (==) Using config directory: "/usr/lib/X11/xorg.conf.d" Fatal server error: xf86OpenConsole: Cannot open /dev/tty0 (No such file or directory) Please consult the The X.Org Foundation support at http://wiki.x.org for help. Please also check the log file at "/var/log/Xorg.0.log" for additional information. ddxSigGiveUp: Closing log

    Read the article

  • No icons in top panel notification area

    - by PeterOakland
    (writing here because my reply to a previous post is marked "deleted by diago") Ubuntu 9.10 Karmic I have tried removing the notification area from the top panel and reinstalling it, and still I have nothing in it: no volume control, no connection icon. I have put a notification area on the bottom panel, and the only icon appearing is the connection one, looks like two plugs meeting, or a large resistor in line, or two toilet plungers face to face, so to speak. How can I get this up top where it used to be, along with the volume control icon? Thanks for any help.

    Read the article

  • Nginx & Passenger - failed (11: Resource temporarily unavailable) while connecting to upstream

    - by Toby Hede
    I have an Nginx and Passenger setup that is proving problematic. At relatively low loads the server seems to get backed up and starts churning results like this into the error.log: connect() to unix:/passenger_helper_server failed (11: Resource temporarily unavailable) while connecting to upstream My passenger setup is:
      passenger_min_instances 2;
      passenger_pool_idle_time 1200;
      passenger_max_pool_size 20;
    I have done some digging, and it looks like the CPU gets pegged. Memory usage seems fine; passenger_memory_stats shows at most about 700MB being used, but CPU approaches 100%. Is this enough to cause this type of error? Should I bring the pool size down? Are there other configuration settings I should be looking at? Any help appreciated. Other pertinent information: Amazon EC2 Small Instance, Ubuntu 10.10, Nginx (latest stable), Passenger (latest stable), Rails 3.0.4
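
    On a Small instance (one virtual CPU, roughly 1.7 GB of RAM) a pool of 20 Rails processes can easily peg the single CPU; a more conservative sketch, with purely illustrative values, would be:
      passenger_max_pool_size   6;
      passenger_min_instances   1;
      passenger_pool_idle_time  300;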

    Read the article

  • Why doesn't update-manager allow me to upgrade distribution?

    - by spoulson
    I have an Ubuntu 9.04 PC behind a corporate firewall and proxy server. This requires that in order to get update-manager to fetch and apply updates, I must set the proxy and authentication settings in the Synaptic network configuration. Once done, I can check for updates and things work smoothly (except I don't get popup notifications of new updates, must manually check periodically). However, distribution updates just don't show up in update-manager, such as the newly released 9.10 Karmic Koala. I had the same issue in upgrading 8.10 to 9.04 and solved it by downloading and upgrading from the 9.04 ISO. What do I need to do to upgrade to 9.10 using the standard update-manager UI?
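
    A hedged command-line alternative that keeps the proxy settings in scope (proxy host, port and credentials are placeholders; /etc/update-manager/release-upgrades should have Prompt=normal so non-LTS releases are offered):
      export http_proxy=http://user:password@proxy.example.com:8080
      sudo -E do-release-upgrade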

    Read the article

  • linux ssh -X graphical applications will not start when system load is high

    - by Chrisv
    So I am using ssh -X to access a server. I am at a Xubuntu desktop accessing a Ubuntu server that is in the next room. Usually everything works fine, but when the system load gets high, any graphical applications I have freeze and fail to be restarted. This happens even if the process that is causing the high load has been niced to a low priority with "nice -n 19". And even though the system load is high, the command line works fine with no delay, and other applications I have running on the server (e.g. virtual machines) run fine. But any graphical application running through X dies. When the graphical applications fail they usually give out an error message that suggests a time-out. It seems that something connected to X has a low priority and times out. But what is it, and how does one fix it?
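
    If the load is I/O-bound rather than CPU-bound, nice alone will not help; one hedged experiment is to also drop the offending job's disk priority and to keep the X connection compressed and alive (the job name is a placeholder):
      nice -n 19 ionice -c3 long_running_job             # idle I/O class on top of the low CPU priority
      ssh -X -C -o ServerAliveInterval=30 user@server    # compress X traffic and keep the session from idling out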

    Read the article

  • Grub2 fails to chainload Windows 7 with error "invalid signature"

    - by atomicpirate
    I've built a new UEFI 64-bit system with both Windows 7 and Ubuntu 11.10 installed (on separate hard drives). I'd like to be able to boot Windows 7 from the grub menu, but I have so far been unsuccessful in getting grub to chainload it. After getting the grub menu, I choose the option for the command line and I can see that bootmgfw.efi is at (hd1,gpt1)/efi/Microsoft/Boot/bootmgfw.efi. However, when I attempt to chainload I get an error: grub> chainloader (hd1,gpt1)/efi/Microsoft/Boot/bootmgfw.efi error: invalid signature I am not sure whether I chose the UEFI boot option when I installed Linux from the LiveCD, and so I am wondering if the grub I have is perhaps unable to chainload in this manner? In any case I am not sure how to get the chainload to work.
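
    For reference, chainloading an EFI binary only works if GRUB itself was installed as an EFI application (grub-efi); a BIOS-mode grub-pc cannot load bootmgfw.efi, and "invalid signature" is a plausible symptom of that mismatch. Assuming an EFI GRUB, a custom entry might look like this (a sketch for /etc/grub.d/40_custom, device taken from the question; run update-grub afterwards):
      menuentry "Windows 7 (UEFI)" {
          insmod part_gpt
          insmod chain
          set root=(hd1,gpt1)
          chainloader /efi/Microsoft/Boot/bootmgfw.efi
      }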

    Read the article

  • SSH Login to an EC2 instance failing with previously working keys...

    - by Matthew Savage
    We recently had an issues where I had rebooted our EC2 instance (Ubuntu x86_64, version 9.10 server) and due to an EC2 issue the instance needed to be stopped and was down for a few days. Now I have been able to bring the instance back online I cannot connect to SSH using the keypair which previously worked. Unfortunately SSH is the only way to get into this server, and while I have another system running in its place there are a number of things I would like to try and retrieve from the machine. Running SSH in verbose mode yields the following: [Broc-MBP.local]: Broc:~/.ssh ? ssh -i ~/.ssh/EC2Keypair.pem -l ubuntu ec2-xxx.compute-1.amazonaws.com -vvv OpenSSH_5.2p1, OpenSSL 0.9.8l 5 Nov 2009 debug1: Reading configuration data /Users/Broc/.ssh/config debug1: Reading configuration data /etc/ssh_config debug2: ssh_connect: needpriv 0 debug1: Connecting to ec2-xxx.compute-1.amazonaws.com [184.73.109.130] port 22. debug1: Connection established. debug3: Not a RSA1 key file /Users/Broc/.ssh/EC2Keypair.pem. debug2: key_type_from_name: unknown key type '-----BEGIN' debug3: key_read: missing keytype debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug2: key_type_from_name: unknown key type '-----END' debug3: key_read: missing keytype debug1: identity file /Users/Broc/.ssh/EC2Keypair.pem type -1 debug3: Not a RSA1 key file /Users/Broc/.ssh/id_rsa. 
debug2: key_type_from_name: unknown key type '-----BEGIN' debug3: key_read: missing keytype debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug2: key_type_from_name: unknown key type '-----END' debug3: key_read: missing keytype debug1: identity file /Users/Broc/.ssh/id_rsa type 1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1p1 Debian-6ubuntu2 debug1: match: OpenSSH_5.1p1 Debian-6ubuntu2 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.2 debug2: fd 3 setting O_NONBLOCK debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,[email protected],aes128-ctr,aes192-ctr,aes256-ctr debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,[email protected],aes128-ctr,aes192-ctr,aes256-ctr debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: 
kex_parse_kexinit: reserved 0 debug2: mac_setup: found hmac-md5 debug1: kex: server->client aes128-ctr hmac-md5 none debug2: mac_setup: found hmac-md5 debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug2: dh_gen_key: priv key bits set: 123/256 debug2: bits set: 500/1024 debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug3: check_host_in_hostfile: filename /Users/Broc/.ssh/known_hosts debug3: check_host_in_hostfile: match line 106 debug3: check_host_in_hostfile: filename /Users/Broc/.ssh/known_hosts debug3: check_host_in_hostfile: match line 106 debug1: Host 'ec2-xxx.compute-1.amazonaws.com' is known and matches the RSA host key. debug1: Found key in /Users/Broc/.ssh/known_hosts:106 debug2: bits set: 521/1024 debug1: ssh_rsa_verify: signature correct debug2: kex_derive_keys debug2: set_newkeys: mode 1 debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug2: set_newkeys: mode 0 debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: /Users/Broc/.ssh/id_rsa (0x100125f70) debug2: key: /Users/Broc/.ssh/EC2Keypair.pem (0x0) debug1: Authentications that can continue: publickey debug3: start over, passed a different list publickey debug3: preferred publickey,keyboard-interactive,password debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive,password debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Offering public key: /Users/Broc/.ssh/id_rsa debug3: send_pubkey_test debug2: we sent a publickey packet, wait for reply debug1: Authentications that can continue: publickey debug1: Trying private key: /Users/Broc/.ssh/EC2Keypair.pem debug1: read PEM private key done: type RSA debug3: sign_and_send_pubkey debug2: we sent a publickey packet, wait for reply debug1: Authentications that can continue: publickey debug2: we did not send a packet, disable method debug1: No more authentication methods to try. Permission denied (publickey). [Broc-MBP.local]: Broc:~/.ssh ? So, right now I'm really at a loss and not sure what to do. While I've already got another system taking the place of this one I'd really like to have access back :|

    Read the article

  • Permissions problem on mounting afp drive

    - by Ron Gejman
    I am trying to mount a network drive via AFP on an Ubuntu 10.04 server machine. After installing AFP support, I use the following command: sudo mount_afp afp://USER:[email protected]/directory/ /media/dir This seems to work and it tells me that mounting succeeded. However, when I navigate to /media/dir I get the following error: cd: cfs: Input/output error Permissions in /media are: d????????? ? ? ? ? dir/ drwx------ 12 user 4.0K 2010-10-25 16:08 otherdisk/ So there is a permissions problem here. I eventually want to mount this drive automatically using fstab. What do I need to do to make the disk accessible?

    Read the article

  • Bash color prompt and long commands

    - by Eric J.
    I'm colorizing parts of my bash prompt using ANSI escape sequences. This works great, until the command I'm currently typing is long enough that it has to wrap. Instead of the rest of the command displaying on the next line, it wraps back to column 1 of the current line, overwriting the beginning of the prompt. I get that behavior with this prompt: export PS1="[\u][\033[0;32;40mdemo \033[0;33;40m1.5.40.b\033[0;37;40m] \w> \033[0m" but it works correctly with the same prompt with the ANSI sequences removed: export PS1="[\u][demo 1.5.40.b] \w> " I'm connecting using the current version of PuTTY, with default PuTTY settings. The OS is Ubuntu 8.10.
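
    The usual fix is to wrap every non-printing escape sequence in \[ ... \] so bash can compute the prompt's real width; a sketch of the same prompt with the brackets added:
      export PS1="[\u][\[\033[0;32;40m\]demo \[\033[0;33;40m\]1.5.40.b\[\033[0;37;40m\]] \w> \[\033[0m\]"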

    Read the article

  • Low-cost, Flexible Log Aggregation [closed]

    - by Dan McClain
    I'm starting to have quite the collection of Ubuntu VMs that I must manage. I'm starting to investigate Puppet for managing the configuration of all of them, and apticron to let me know what's out of date. But the issue I feel I should deal with sooner rather than later is log aggregation. I'd like to stay in the free/open source realm for now, seeing that we don't have much budget for something like Splunk yet. In addition to syslog, I would like to collect application-specific logs (we are running different apps on different machines, from nginx+Passenger for Rails, to Apache+Tomcat for Java, to PHP for ExpressionEngine, plus MySQL/PostgreSQL database servers), so that we can analyze the relevant data. For now, I'm just looking to get all the logs in one place.
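
    Since every box is Ubuntu, rsyslog is already installed; a minimal forwarding sketch (the host name and port are placeholders), with rsyslog's imfile module available for application logs that never pass through syslog:
      # on each VM: /etc/rsyslog.d/90-forward.conf
      *.*  @@loghost.example.com:514            # @@ forwards over TCP, a single @ uses UDP

      # on the aggregation host: /etc/rsyslog.conf
      $ModLoad imtcp
      $InputTCPServerRun 514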

    Read the article
