Search Results

Search found 45752 results on 1831 pages for 'ubuntu linux'.


  • Trying to set up catch-all mail forwarding on my Rackspace cloud server

    - by bob_cobb
    I'm running Ubuntu 12.04 (Precise Pangolin) and am trying to configure my server to catch all mail sent to it and forward it to my Gmail address. I've been trying lots of examples online, like editing my main.cf file, which looks like this:

        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        biff = no

        # appending .domain is the MUA's job.
        append_dot_mydomain = no

        # Uncomment the next line to generate "delayed mail" warnings
        #delay_warning_time = 4h

        readme_directory = no

        # TLS parameters
        smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_use_tls=yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache

        # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
        # information on enabling SSL in the smtp client.
        myhostname = destiny
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        mydestination = destiny, localhost.localdomain, localhost
        relayhost = smtp.sendgrid.net
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        mailbox_size_limit = 51200000
        recipient_delimiter = +
        inet_interfaces = all
        inet_protocols = all

    In my /etc/postfix/virtual I have:

        @mydomain.com        [email protected]
        @myotherdomain.com   [email protected]

    This isn't working when I email [email protected] or [email protected]. So I got the recommendation to add the following to my /etc/aliases:

        postmaster: root
        root: [email protected]

    I restarted Postfix and tried emailing [email protected] and [email protected], but it still won't deliver. Does anyone have any idea what I'm doing wrong here? I'd appreciate any help.
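
    One thing stands out (an untested sketch, not a confirmed fix): the main.cf shown never references the virtual table, so Postfix would ignore /etc/postfix/virtual entirely. The usual wiring looks like this, with the domain names standing in for whatever is actually hosted:

        # /etc/postfix/main.cf -- hypothetical additions
        virtual_alias_domains = mydomain.com myotherdomain.com
        virtual_alias_maps = hash:/etc/postfix/virtual

        # rebuild the lookup table and reload
        sudo postmap /etc/postfix/virtual
        sudo service postfix reload

    The Postfix documentation also recommends listing each domain in only one place: a domain in virtual_alias_domains should not additionally appear in mydestination.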

  • Internet Radio Station for University

    - by ryan
    I am trying to help my university's student radio station rethink the way they stream music, but I have some questions regarding the use of Ubuntu to stream. Currently the station uses two Windows machines: one streams the station and serves the website, and the other is used by rotating DJs to select songs and create playlists. The DJ computer feeds mono into the sound card of the server, and the server streams the feed online. Ideally:

    - I would like to maintain a two-computer setup: one computer as server, and another used to select and play music by rotating DJs.
    - I would like to use Ubuntu for the server.
    - I would like to use Windows for the other machine.
    - The server should be able to stream song information.

    First, is there a way to somehow get the song information from an analog feed? Second, what is the best streaming server for radio? I have encountered SHOUTcast, Icecast, and Darwin, but I don't know where to begin in attempting to gauge them. Finally, any tips or pointers about small internet radio station management and setup would be appreciated, as this is my first radio station and I am eager to hear of past experiences.
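
    On the metadata question: song titles cannot be recovered from the analog audio itself; with most streaming servers the titles travel out-of-band, pushed by whatever knows what is playing. As one hedged example, Icecast 2 exposes an admin endpoint that the DJ machine (or its playlist software) could call on each track change; the host, mount, and credentials below are placeholders:

        # assumes Icecast 2 on port 8000 with a mount point /stream
        curl -u admin:hackme \
            "http://radio.example.edu:8000/admin/metadata?mount=/stream&mode=updinfo&song=Artist+-+Title"

    A source client such as darkice on the Ubuntu server can capture the analog input from the sound card and feed it to Icecast; the metadata updates then ride alongside via calls like the one above.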

  • Spammer relaying via Postfix mail server

    - by Paddington
    I have a Plesk 9.5 mail server (cm.snowbarre.co.za) on Ubuntu 8.04 LTS which forwards all SMTP traffic to an anti-spam server, cacti.snowbarre.co.za. Many times I see the headers on the anti-spam server contain From addresses not hosted on the mail server, and I have checked and confirmed that my server is not an open relay. How can a spammer be using my server to relay spam traffic? How can I stop this?

    Open relay test:

        paddington@paddington-MS-7387:~$ telnet cm 25
        Trying 196.201.x.x...
        Connected to cm.
        Escape character is '^]'.
        220 cm.snowbarre.co.za ESMTP Postfix (Ubuntu)
        mail from:[email protected]
        250 2.1.0 Ok
        rcpt:[email protected]
        221 2.7.0 Error: I can break rules, too. Goodbye.
        Connection closed by foreign host.
        paddington@paddington-MS-7387:~$

    A typical set of headers is:

        Received: from cm.snowbarre.co.za (cm.snowbarre.co.za [196.201.x.x])
            by cacti.snowbarre.co.za (Postfix) with ESMTPS id 00B601881AD;
            Mon, 27 Aug 2012 14:03:29 +0200 (SAST)
        Received: from cm.snowbarre.co.za (localhost [127.0.0.1])
            by cm.snowbarre.co.za (Postfix) with ESMTP id 81627367E007;
            Mon, 27 Aug 2012 14:02:50 +0200 (SAST)
        Received: from User (ml82.128.x.x.multilinksg.com [82.128.x.x])
            by cm.snowbarre.co.za (Postfix) with ESMTP;
            Mon, 27 Aug 2012 14:02:49 +0200 (SAST)
        Reply-To: <[email protected]>
        From: "Ms Nkeuri Aguiyi" <[email protected]>
        Subject: Your Unpaid Fund.
        Date: Mon, 27 Aug 2012 05:03:22 -0700
        MIME-Version: 1.0
        Content-Type: text/html; charset="Windows-1251"
        Content-Transfer-Encoding: 7bit
        X-Priority: 3
        X-MSMail-Priority: Normal
        X-Mailer: Microsoft Outlook Express 6.00.2600.0000
        X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2600.0000
        X-Antivirus: avast! (VPS 120821-0, 08/21/2012), Outbound message
        X-Antivirus-Status: Clean
        Message-Id: <[email protected]>
        To: undisclosed-recipients:;
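
    A reading of the third Received line (speculative, since only one sample is shown): the message entered from an external host (82.128.x.x) and was accepted for relaying, which on a non-open-relay usually means either a compromised mailbox authenticating via SASL or overly permissive restrictions. A sketch of the standard Postfix lockdown and a way to spot an abused account; the restriction names are standard Postfix, the log path is the Ubuntu default:

        # /etc/postfix/main.cf -- hedged sketch
        smtpd_recipient_restrictions =
            permit_mynetworks,
            permit_sasl_authenticated,
            reject_unauth_destination

        # count messages per authenticated user; one name with a huge count
        # is usually the stolen credential
        grep -o 'sasl_username=[^ ]*' /var/log/mail.log | sort | uniq -c | sort -rn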

  • How do I create a "here document" within a shell function?

    - by BenU
    I'm working my way through William Shotts Jr.'s great The Linux Command Line on my Mac OS X 10.7.5 system. 90% of the Linux that Shotts covers is close enough to Darwin that I can figure out, or GTEM to figure out, what's going on. I've made it to chapter 27 on "Writing Shell Scripts" and am getting hung up creating here documents within a function. I get a syntax error: unexpected end of file error when I include the following function:

        report_uptime () {
            cat <<- _EOF_
                <H2>System Uptime</H2>
                <PRE>$(uptime)</PRE>
            _EOF_
            return
        }

    The error goes away if I use the following placeholder function:

        report_uptime () {
            return
        }

    Also, elsewhere in the script, outside of a function, I use the cat << _EOF_ form to create a here document with no trouble:

        cat << _EOF_
        <HTML>
        <HEAD>
        <TITLE>$TITLE</TITLE>
        </HEAD>
        <BODY>
        <H1>$TITLE</H1>
        <P>$TIME_STAMP</P>
        $(report_uptime)
        $(report_disk_space)
        $(report_home_space)
        </BODY>
        </HTML>
        _EOF_

    If anyone has any idea what I'm doing wrong I would be grateful!
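
    The usual culprit here (well-established shell behavior, though the exact whitespace in the function above is reconstructed from a flattened post): <<- strips leading tab characters only, never spaces. If an editor converted the book's tabs to spaces, the indented _EOF_ terminator is never recognized, the here document swallows the rest of the script, and the shell reports "unexpected end of file". The unindented cat << _EOF_ works because its terminator starts at column zero. Two working forms:

        # Form 1: <<- with real tabs; <TAB> below stands for a literal tab character
        report_uptime () {
        <TAB>cat <<- _EOF_
        <TAB><TAB><H2>System Uptime</H2>
        <TAB><TAB><PRE>$(uptime)</PRE>
        <TAB>_EOF_
        <TAB>return
        }

        # Form 2: plain << with the terminator at the start of the line
        report_uptime () {
            cat << _EOF_
        <H2>System Uptime</H2>
        <PRE>$(uptime)</PRE>
        _EOF_
            return
        }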

  • GRE Tunnel over IPsec with Loopback

    - by Alek
    Hello, I'm having a really hard time trying to establish a VPN connection using a GRE over IPsec tunnel. The problem is that it involves some sort of "loopback" connection which I don't understand -- let alone am able to configure -- and the only help I could find relates to configuring Cisco routers. My network is composed of a router and a single host running Debian Linux. My task is to create a GRE tunnel over an IPsec infrastructure, particularly intended to route multicast traffic between my network, which I am allowed to configure, and a remote network, for which I only have a form containing some setup information (IP addresses and phase information for IPsec). For now it suffices to establish communication between this single host and the remote network, but in the future it will be desirable to route the traffic to other machines on my network.

    As I said, this GRE tunnel involves a "loopback" connection which I have no idea how to configure. From my previous understanding, a loopback connection is simply a local pseudo-device used mostly for testing purposes, but in this context it might be something more specific that I do not know about. I have managed to properly establish the IPsec communication using racoon and ipsec-tools, and I believe I'm familiar with the creation of tunnels and the addition of addresses to interfaces using ip, so the focus is on the GRE step. The worst part is that the remote peers do not respond to ping requests, and debugging the general setup is very difficult due to the encrypted nature of the traffic.

    There are two pairs of IP addresses involved: one pair for the GRE tunnel peer-to-peer connection and one pair for the "loopback" part. There is also an IP range involved, which is supposed to provide the final IP addresses for the hosts inside the VPN. My question is: how (or if) can this setup be done? Do I need some special software or another daemon, or does the Linux kernel handle every aspect of the GRE/IPsec tunneling? Please let me know if any extra information would be useful. Any help is greatly appreciated.
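
    A possible starting point (a sketch under the assumption that the "loopback" addresses are simply the tunnel's inner endpoints, as on Cisco routers where the tunnel is sourced from a loopback interface; every name below is a placeholder): the Linux kernel handles GRE natively once ip_gre is loaded, so no extra daemon is needed beyond racoon for the IPsec layer.

        # GRE rides on the outer addresses that IPsec already protects
        modprobe ip_gre
        ip tunnel add gre1 mode gre local LOCAL_WAN_IP remote REMOTE_WAN_IP ttl 255

        # the "loopback" pair becomes the tunnel's inner point-to-point addresses
        ip addr add LOCAL_LOOPBACK_IP peer REMOTE_LOOPBACK_IP dev gre1
        ip link set gre1 up

        # route the remote VPN range through the tunnel
        ip route add REMOTE_VPN_RANGE dev gre1

    With this in place, tcpdump -ni gre1 shows the decapsulated traffic, which makes debugging possible despite the encrypted outer layer.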

  • pptpd-logwtmp.so records wrong IP address

    - by rgc
    I have set up a PPTP VPN on Ubuntu 10.04 and configured the local and remote IP addresses as follows:

        local IP address   10.0.30.100
        remote IP address  192.168.13.100

    After I use Windows to connect to the VPN server, Windows can only send packets; it cannot receive any. I also checked the PPTP log on the Ubuntu server and found this strange thing:

        rcvd [IPCP ConfAck id=0x1 <compress VJ 0f 01> <addr 10.0.30.100>]
        rcvd [IPCP ConfReq id=0x2 <compress VJ 0f 01> <addr 192.168.13.100> <ms-dns1 202.119.32.6> <ms-dns2 202.119.32.12>]
        sent [IPCP ConfAck id=0x2 <compress VJ 0f 01> <addr 192.168.13.100> <ms-dns1 202.119.32.6> <ms-dns2 202.119.32.12>]
        Cannot determine ethernet address for proxy ARP
        local IP address 10.0.30.100
        remote IP address 192.168.13.100
        pptpd-logwtmp.so ip-up ppp0 172.25.146.174

    I did not set 172.25.146.174; it should be 192.168.13.100. Can anyone tell me what I should do to fix this problem?
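
    The one-way traffic likely traces to the "Cannot determine ethernet address for proxy ARP" line rather than to the logwtmp message: proxy ARP only works when the client's address falls inside the subnet of one of the server's ethernet interfaces. Two common fixes, sketched with assumed subnets:

        # Option A: hand out client addresses from the server's own LAN
        # /etc/pptpd.conf
        localip 192.168.13.1
        remoteip 192.168.13.100-150

        # Option B: keep a separate subnet and NAT it out instead
        echo 1 > /proc/sys/net/ipv4/ip_forward
        iptables -t nat -A POSTROUTING -s 192.168.13.0/24 -o eth0 -j MASQUERADE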

  • Backup solution, or, how Duplicati duped me

    - by blarghmaster
    TL/DR version: Mono + Duplicati.commandline.exe restore spits this out for several files regardless of what I try. I am able to list sets, list files in said sets, even do a verify, but each time I do a restore of any kind, I get errors to the effect of:

        Failed to restore file: "snapshot/blahblah/2005-11-07.tar.gz",
        Error message: The partial file record for snapshot/blahblah/2005-11-07.tar.gz does not match the file

    Any advice here, or an idea of where to look for a better solution?

    FULL STORY: I've recently put together a nice, clean, friendly backup solution for several servers, predominantly Linux, but occasionally a Windows box is added too. The solution as is meets all my requirements and does it well... save one: cross-compatibility. The solution is based on a combination of several elements, but eventually comes down to using Duplicity and Duplicati for the actual storage of files. The entire solution was ready to go before I realized that Duplicati does not, in fact, allow me to restore my files to a Linux box, regardless of what the command line under Mono might tell you. It just spits out errors on random zip and image files, for apparently no good reason, as I have tried several options to get it to restore, and several versions of Mono, including installing it pretty much lib-for-lib. There is no effective log file explaining these errors, and even the --debug-output=true flag does nothing. Each restore attempt ends with the same errors quoted above.

    Now I could most likely use the friendly instructions on Duplicati's site and script a bash equivalent of the restore, but that's not exactly ideal. Any advice on this? Or possibly an alternative solution that presents the same benefits as Duplicati/Duplicity but actually works across platforms?
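
    Since Duplicity is already part of the stack and runs anywhere Python does, one cross-platform fallback worth sketching is restoring with it directly; the backend URL and paths below are placeholders:

        # full restore of the newest backup set
        duplicity restore sftp://backup@backuphost/backups/myserver /tmp/restore

        # restore a single file from a backup taken 3 days ago
        duplicity restore --file-to-restore snapshot/blahblah/2005-11-07.tar.gz \
            --time 3D sftp://backup@backuphost/backups/myserver /tmp/2005-11-07.tar.gz

    This sidesteps Mono entirely, at the cost of standardizing on Duplicity's archive format on both platforms.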

  • Authenticate VNC session with ConsoleKit?

    - by lori
    I have a Linux machine running Fedora 16 in a cupboard. It has no screen or keyboard. I connect to it using a combination of VNC and SSH. Recently, after an update, I have had issues with authentication on the machine. If I VNC to it, the KDE desktop pops up an error dialog every few minutes saying "Authorization failed. Failed to obtain authentication." If I plug in a USB drive it fails to mount; Dolphin reports an authentication issue again. I have had limited success finding the solution. AFAICT, it is an issue with ConsoleKit deeming me to be a non-local user, so it prevents authentication. This is the output from ck-list-sessions:

        $ ck-list-sessions
        Session5:
                unix-user = '1000'
                realname = 'steve'
                seat = 'Seat6'
                session-type = ''
                active = FALSE
                x11-display = ':1'
                x11-display-device = ''
                display-device = ''
                remote-host-name = ''
                is-local = FALSE
                on-since = '2012-09-16T08:07:03.137011Z'
                login-session-id = '1'

    I have tried to update my .vnc/xstartup script to include ck-launch-session as follows:

        $ cat ~/.vnc/xstartup
        #!/bin/sh
        exec ck-launch-session vncconfig -iconic &
        unset SESSION_MANAGER
        unset DBUS_SESSION_BUS_ADDRESS
        export XKL_XMODMAP_DISABLE=1
        OS=`uname -s`
        if [ $OS = 'Linux' ]; then
          case "$WINDOWMANAGER" in
            *gnome*)
              if [ -e /etc/SuSE-release ]; then
                PATH=$PATH:/opt/gnome/bin
                export PATH
              fi
              ;;
          esac
        fi
        if [ -x /etc/X11/xinit/xinitrc ]; then
          exec ck-launch-session /etc/X11/xinit/xinitrc
        fi
        if [ -f /etc/X11/xinit/xinitrc ]; then
          exec ck-launch-session sh /etc/X11/xinit/xinitrc
        fi
        [ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
        exec ck-launch-session xsetroot -solid grey
        exec ck-launch-session xterm -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop" &
        exec ck-launch-session twm &

    This has not helped. How can I either authenticate myself to ConsoleKit, or trick it into believing I am a local user?
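
    One structural problem in that script (a hedged observation): wrapping each command in its own ck-launch-session creates several disjoint ConsoleKit sessions, none of which the desktop itself inherits; the usual pattern is a single ck-launch-session wrapping the whole session startup. A minimal sketch, assuming KDE is started with startkde:

        #!/bin/sh
        # one ConsoleKit + D-Bus session wrapping the entire desktop
        unset SESSION_MANAGER
        unset DBUS_SESSION_BUS_ADDRESS
        [ -r "$HOME/.Xresources" ] && xrdb "$HOME/.Xresources"
        vncconfig -iconic &
        exec ck-launch-session dbus-launch --exit-with-session startkde

    Even then, a VNC display may still be classified is-local = FALSE (there is no x11-display-device); if so, the remaining option is PolicyKit rules that grant the needed actions to non-local sessions rather than tricking ConsoleKit.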

  • How to make an SSH connection between servers using public-key authentication

    - by Rafael
    I am setting up a continuous integration (CI) server and a test web server. I would like the CI server to access the web server with public-key authentication. On the web server I created a user and generated the keys:

        sudo useradd -d /var/www/user -m user
        sudo passwd user
        sudo su user
        ssh-keygen -t rsa
        Generating public/private rsa key pair.
        Enter file in which to save the key (/var/www/user/.ssh/id_rsa):
        Created directory '/var/www/user/.ssh'.
        Enter passphrase (empty for no passphrase):
        Enter same passphrase again:
        Your identification has been saved in /var/www/user/.ssh/id_rsa.
        Your public key has been saved in /var/www/user/.ssh/id_rsa.pub.

    On the other side, the CI server copies the key to the host but is still asked for a password:

        ssh-copy-id -i ~/.ssh/id_rsa.pub user@webserver-address
        user@webserver-address's password:
        Now try logging into the machine, with "ssh 'user@webserver-address'", and check in:
          .ssh/authorized_keys
        to make sure we haven't added extra keys that you weren't expecting.

    I checked on the web server, and the CI server's public key has been copied to authorized_keys, but when I connect, it asks for a password:

        ssh 'user@webserver-address'
        user@webserver-address's password:

    If I use root rather than my created user (both users have the copied public keys), it connects with the public key:

        ssh 'root@webserver-address'
        Welcome to Ubuntu 11.04 (GNU/Linux 2.6.18-274.7.1.el5.028stab095.1 x86_64)
         * Documentation:  https://help.ubuntu.com/
        Last login: Wed Apr 11 10:21:13 2012 from *******
        root@webserver-address:~#
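
    A likely explanation (standard sshd behavior, though unverified against this box): with StrictModes enabled, which is the default, sshd silently ignores authorized_keys whenever the home directory, ~/.ssh, or the file itself is writable by group or others, and /var/www trees are often group-writable for the web server. Root is exempt from the web-directory layout, which would explain why it works. A checklist sketch:

        # on the web server, as root
        chown -R user:user /var/www/user
        chmod go-w /var/www/user
        chmod 700 /var/www/user/.ssh
        chmod 600 /var/www/user/.ssh/authorized_keys

        # if it still fails, sshd usually states the reason here
        tail -f /var/log/auth.log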

  • Why is SSH finding remote keys for other accounts?

    - by Brian Pontarelli
    This is a strange issue I'm having with SSH from my MacBook Pro to a Linux (Ubuntu 11.10) server. I have a DSA key set up on the remote Linux server under my home directory like this:

        /home/me/.ssh/authorized_keys

    I also have the same DSA key set up for a few other accounts on the machine, named "foo" and "bar". I can log into all of the accounts fine without any password; therefore, the DSA keys are all set up correctly. The strange behavior I'm seeing is when debugging the SSH connection. During the connection, the SSH debug outputs this:

        debug2: key: /Users/me/.ssh/id_dsa (0x7f91a1424220)
        debug2: key: /home/foo/.ssh/id_dsa (0x7f91a1425620)
        debug2: key: /home/bar/.ssh/id_rsa (0x7f91a1425c60)
        debug2: key: /Users/me/.ssh/id_rsa (0x0)

    This is strange for so many reasons, but essentially: why is SSH listing out keys on the server (/home/foo/.ssh/id_dsa and /home/bar/.ssh/id_rsa)? These files don't even exist on the server, so why are they listed? I'm not logging into the "foo" or "bar" accounts, so why is SSH even listing those? On my MacBook Pro I only have a DSA key, but SSH is listing out an RSA key; what's that all about? Another user on the server doesn't get any of these messages when they log in, and they have the exact same setup for their DSA key and the exact same MacBook Pro setup as mine. Does anyone know what these messages are and why SSH is outputting them?
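
    A plausible reading of those lines (hedged, but consistent with how OpenSSH debug output works): the debug2 "key:" lines are printed by the local client, not the server; they enumerate the identities the client intends to offer. Keys held in ssh-agent carry, as their display name, the file path from wherever they were originally added, which is how server-style /home paths can appear in a Mac's client output; the non-zero pointers mark agent-loaded keys. A quick check:

        # list what the agent holds; the trailing text is the stored comment/path
        ssh-add -l

        # if stale or forwarded identities are loaded, clear and re-add only yours
        ssh-add -D
        ssh-add ~/.ssh/id_dsa

    The other user likely sees none of this simply because their agent holds no extra keys.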

  • Libvirt / QEmu Machine Fails and Refuses Restart Because of Memory Allocation Errors

    - by Elmar Weber
    I'm having a problem with libvirt. On a system restart, all virtual machines (VMs) start without a problem and keep running. Then at some point in time a set of machines shuts down, according to their logs. When I try to restart a machine, I get an error that the memory allocation failed, although more than enough memory appears to be free:

        server ~ # free
                     total       used       free     shared    buffers     cached
        Mem:      16176648   16025476     151172          0     285432     950300
        -/+ buffers/cache:   14789744    1386904
        Swap:            0          0          0

        server ~ # virsh start zimbra
        error: Failed to start domain zimbra
        error: Unable to read from monitor: Connection reset by peer

        server ~ # tail -n 4 /var/log/libvirt/qemu/zimbra.log
        LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin QEMU_AUDIO_DRV=none /usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 3072 -smp 2,sockets=2,cores=1,threads=1 -name zimbra -uuid d05ddb7a-83c4-a77b-d8bc-a322648520cf -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/zimbra.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -drive file=/var/lib/libvirt/images/zimbra.img,if=none,id=drive-ide0-0-0,format=raw -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1 -netdev tap,fd=19,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=52:54:00:21:a9:ad,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -usb -vnc 192.168.1.2:25 -k de -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4
        char device redirected to /dev/pts/2
        Failed to allocate 3221225472 B: Cannot allocate memory
        2012-07-06 08:42:56.076+0000: shutting down

        server ~ # uname -a
        Linux server 3.2.0-26-generic #41-Ubuntu SMP Thu Jun 14 17:49:24 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

    The system is an Ubuntu 12.04 server. The problem seems to occur since the last restart, which was due to a number of package upgrades and a kernel upgrade. I tried booting with the previous kernel; the problem persists. I was not able to pinpoint an exact event when the machines fail; they do it at nearly the same time. The last time, a duplicity job was running; this was not always the case, however. Any suggestions on how to debug this?

    Best regards, elm
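
    One correction to the premise, plus a triage sketch (the commands are generic; the swap sizing is an assumption): despite the headline "free" column, the -/+ buffers/cache row shows only about 1.4 GB truly available, well short of the 3 GB (3221225472 B) the guest asks for, and with no swap configured a single oversized process can starve the rest.

        # who is actually holding the memory?
        ps aux --sort=-rss | head -15

        # did the kernel's OOM killer take the VMs down? it logs here
        dmesg | grep -i -E 'oom|out of memory'

        # a modest swap file as a safety net while investigating
        fallocate -l 4G /swapfile && chmod 600 /swapfile
        mkswap /swapfile && swapon /swapfile

    If the OOM killer shows up in dmesg, the simultaneous shutdowns and the timing near the duplicity job (a memory-hungry process) would both be explained.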

  • How to manage process-to-CPU-core affinities?

    - by Philippe
    I use a distributed user-space filesystem (GlusterFS) and I would like to be sure GlusterFS processes will always have the computing power they need. Each execution node of my grid have 2 CPU, with 4 cores per CPU and 2 threads per core (16 "processors" are seen by Linux). My goal is to guarantee that GlusterFS processes have enough processing power to be reliable, responsive and fast. (There is no marketing here, just the dreams of a sysadmin ;-) I consider two main points : GlusterFS processes I/O for data access (on local disks, or remote disks) I thought about binding the Linux Kernel and GlusterFS instances on a specific "processor". I would like to be sure that : No grid job will impact the kernel and the GlusterFS instances Researchers jobs won't be affected by system processes (I'd like to reserve a pool of cores to job execution and be sure that no system process will use these CPUs) But what about I/O ? As we handle a huge amount of data (several terabytes), we'll have a lot of interuptions. How can I distribute these operations on my processors ? What are the "best practices" ? Thanks for your comments!
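
    Linux exposes all three levers directly; a sketch under the assumption that cores 0-3 are reserved for the system and GlusterFS, with the IRQ number being a placeholder you would look up in /proc/interrupts:

        # pin an already-running glusterfsd onto cores 0-3
        taskset -cp 0-3 $(pidof glusterfsd)

        # steer a disk controller's interrupts onto the same cores
        # (0f is the hex mask for CPUs 0-3; 45 is a hypothetical IRQ)
        echo 0f > /proc/irq/45/smp_affinity

        # keep cores 4-15 out of the default scheduler pool at boot,
        # so only explicitly-pinned grid jobs land there (kernel cmdline)
        isolcpus=4-15

    For anything beyond a handful of processes, cpusets (via cgroups) are the more maintainable version of the same idea: one cpuset for system + GlusterFS, one for grid jobs.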

  • Why won't my service start, and why doesn't upstart output any errors?

    - by Alex Waters
    I am trying to 'start gunicorn' as a service via upstart as user ale, using gunicorn/Flask on Ubuntu 12.04 with init (upstart 1.5). Here is my /etc/init/gunicorn.conf:

        setuid btw
        setgid flask

        script
            export HOME=/home/btw
            export WORKON_HOME=$HOME/.virtualenvs
            . $HOME/.virtualenvs/default/bin/activate
            cd $HOME/flask
            workon default
            gunicorn -c gunicorn.py bw:app
        end script

    It doesn't output anything other than gunicorn start/running, process 12992. If I then do status gunicorn, I get stop/waiting. Any ideas on how to debug this? I tried following http://upstart.ubuntu.com/wiki/Debugging but it didn't help. If I do the following as user ale in the app's directory:

        workon default
        gunicorn -c gunicorn.py bw:app

    then gunicorn runs fine. Here is ~/flask/gunicorn.py:

        bind = "0.0.0.0:8080"
        workers = 3
        backlog = 2048
        worker_class = "gevent"
        debug = True
        daemon = False
        pidfile = "/tmp/gunicorn.pid"
        log_level = "debug"
        accesslog = "/var/log/gunicorn/access.log"
        errorlog = "/var/log/gunicorn/error.log"
        user = "btw"
        group = "flask"

    Also, /var/log/error.log doesn't show anything new when I try to start the gunicorn service. If I start it manually, it shows that the workers have been loaded, etc. Thanks for any help / suggestions!
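
    Two hedged leads. First, upstart runs script stanzas under sh -e, so the first failing command kills the job silently, and workon is a shell function defined by virtualenvwrapper in interactive shells, so it almost certainly doesn't exist here; the activate line above already does the same work. Second, upstart 1.4+ can capture the job's own output, which usually surfaces the real error. A sketch of the revised job:

        # /etc/init/gunicorn.conf -- sketch
        setuid btw
        setgid flask
        console log     # stdout/stderr land in /var/log/upstart/gunicorn.log

        script
            export HOME=/home/btw
            . $HOME/.virtualenvs/default/bin/activate   # replaces workon
            cd $HOME/flask
            exec gunicorn -c gunicorn.py bw:app
        end script

    After a failed start, /var/log/upstart/gunicorn.log should say exactly why the process exited.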

  • NATing traffic from a tunnel to the internet

    - by mezgani
    I'm trying to set up a GRE tunnel between a Linux box and a router (LAN), and I'm having a few problems which seem to depend on my iptables configuration. Watching with tcpdump on the Linux box, I can see packets coming in with GREv0 flags; all I need right now is to forward this data to the internet. Here is my setup:

        iptables -F
        iptables -X
        iptables -P INPUT ACCEPT
        iptables -P FORWARD ACCEPT
        iptables -P OUTPUT ACCEPT
        iptables -t nat -F
        iptables -t nat -X
        iptables -t nat -P PREROUTING ACCEPT
        iptables -t nat -P POSTROUTING ACCEPT
        iptables -t nat -P OUTPUT ACCEPT
        iptables -t mangle -F
        iptables -t mangle -X
        iptables -t mangle -P PREROUTING ACCEPT
        iptables -t mangle -P OUTPUT ACCEPT
        iptables -A INPUT -p 47 -j ACCEPT
        iptables -A FORWARD -i ppp0 -o cloud -j ACCEPT
        iptables -A FORWARD -i cloud -o ppp0 -j ACCEPT
        iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -t nat -A POSTROUTING -o ppp0 -j MASQUERADE
        echo "1" > /proc/sys/net/ipv4/ip_forward

        cloud     Link encap:UNSPEC  HWaddr C4-CE-7A-2E-F2-BF-DD-C0-00-00-00-00-00-00-00-00
                  inet addr:10.3.3.3  P-t-P:10.3.3.3  Mask:255.255.255.255
                  UP POINTOPOINT RUNNING NOARP  MTU:1476  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:124 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:0 (0.0 B)  TX bytes:10416 (10.1 KiB)

        Kernel IP routing table
        Destination     Gateway         Genmask          Flags  MSS  Window  irtt  Iface
        196.206.120.1   0.0.0.0         255.255.255.255  UH     0    0       0     ppp0
        192.168.0.0     0.0.0.0         255.255.255.0    U      0    0       0     eth0
        10.3.3.0        0.0.0.0         255.255.255.0    U      0    0       0     cloud
        0.0.0.0         196.206.120.1   0.0.0.0          UG     0    0       0     ppp0

        root@aldebaran:~# ip route
        196.206.120.1 dev ppp0  proto kernel  scope link  src 196.206.122.46
        192.168.0.0/24 dev eth0  proto kernel  scope link  src 192.168.0.18
        10.3.3.0/24 dev cloud  scope link
        default via 196.206.120.1 dev ppp0
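
    One observation (speculative, based only on the counters above): the cloud interface shows TX packets but zero RX, i.e. GRE packets reach ppp0 (tcpdump sees them) but are never decapsulated into the tunnel. That pattern usually means the tunnel's local/remote endpoints don't match the outer addresses the packets actually carry, which matters on a ppp link whose address can change. A verification sketch; ROUTER_IP stands for the peer's public address:

        # confirm how the tunnel was created; local/remote must match reality
        ip tunnel show cloud

        # a typical (re)definition
        ip tunnel add cloud mode gre local 196.206.122.46 remote ROUTER_IP ttl 255

        # watch decapsulated traffic directly
        tcpdump -ni cloud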

  • ssh without password does not work for some users

    - by joshxdr
    I have a new RHEL4 Linux box that I am using to copy data to old Solaris 2.6 and RHEL3 Linux boxes with scp. I have found that with the same setup, it works for some users but not for others. For user jane, this works fine:

        jane@host1$ ssh -v remhost
        debug1: Next authentication method: publickey
        debug1: Trying private key: /mnt/home/osborjo/.ssh/identity
        debug1: Offering public key: /mnt/home/osborjo/.ssh/id_rsa
        debug1: Server accepts key: pkalg ssh-rsa blen 277
        debug1: read PEM private key done: type RSA
        debug1: Authentication succeeded (publickey).

    For user jack it does not:

        jack@host1$ ssh -v remhost
        debug1: Next authentication method: publickey
        debug1: Trying private key: /mnt/home/oper1/.ssh/identity
        debug1: Offering public key: /mnt/home/oper1/.ssh/id_rsa
        debug1: Authentications that can continue: publickey,password,keyboard-interactive

    I have looked at the permissions for all the keys and files; they look the same. Since I am using home directories mounted by NFS, the keys for both the remote host and the local host are in the same directory. This is how things look for jane:

        jane@host1$ ls -l $HOME/.ssh
        -rw-rw-r-- 1 jane operator  394 Jan 27 16:28 authorized_keys
        -rw------- 1 jane operator 1675 Jan 27 16:27 id_rsa
        -rw-r--r-- 1 jane operator  394 Jan 27 16:27 id_rsa.pub
        -rw-rw-r-- 1 jane operator 1205 Jan 27 16:46 known_hosts

    For user jack:

        jack@host1$ ls -l $HOME/.ssh
        -rw-rw-r-- 1 jack engineer  394 Jan 27 16:28 authorized_keys
        -rw------- 1 jack engineer 1675 Jan 27 16:27 id_rsa
        -rw-r--r-- 1 jack engineer  394 Jan 27 16:27 id_rsa.pub
        -rw-rw-r-- 1 jack engineer 1205 Jan 27 16:46 known_hosts

    As a last-ditch effort, I copied the authorized_keys, id_rsa, and id_rsa.pub from jane to jack, and changed the username in authorized_keys and id_rsa.pub with vi. It still did not work. It seems there is something different between the two users but I cannot figure out what it is.
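
    A hedged pointer: sshd's StrictModes check also covers the home directory itself, not just ~/.ssh, and the 664 mode on authorized_keys is already on the edge of what servers tolerate; a directory-level difference (say, a group-writable home for jack) would be invisible in the listings above. A comparison sketch using the paths shown:

        # compare the parent directories, not just the key files
        ls -ld /mnt/home/oper1 /mnt/home/oper1/.ssh
        ls -ld /mnt/home/osborjo /mnt/home/osborjo/.ssh

        # tighten to modes every sshd accepts
        chmod 755 /mnt/home/oper1
        chmod 700 /mnt/home/oper1/.ssh
        chmod 600 /mnt/home/oper1/.ssh/authorized_keys

    The decisive evidence lives on the server side: syslog on remhost, or an sshd -d instance on a spare port, will name the rejected file outright. With NFS, root_squash on the export is another classic source of per-user divergence worth checking.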

  • VirtualBox communication from Linux to/from Windows 7

    - by J. Otto Tennant
    VirtualBox is running in Windows 7 as the host. VirtualBox has the two modifications (one is called Guest Additions; I don't remember the other). The virtual machine has "bridged" networking selected. I have Samba set up (now, the problem may be here; it has been three or four years since I last did this) on the Linux guest machine. Neither guest nor host sees the other. From the Windows 7 command prompt, the IP address of the Linux guest pings. The IP address of another computer (a separate Windows 7 on the wireless network) pings from the Linux guest. (I have no idea what IP address the Windows 7 host itself has; the output of netstat does not seem to be useful.) So it seems to me that something should be working. The only workgroup on the LAN is inventively named WORKGROUP. SMB4K should be seeing something. There must be a simple setup step that I am missing. (FWIW, there are two processes running smbd, and no process is running nmbd. YaST says that nmbd is set to run. I am not sure what this means.)
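
    The nmbd detail looks like the whole story (hedged, but it fits every symptom): workgroup browsing is exactly what nmbd provides (NetBIOS name resolution and browse lists), so smbd running without nmbd means shares exist but nobody can discover them. And for the host's address, ipconfig rather than netstat is the tool. A sketch; the share path is a placeholder:

        # on the Linux guest (SUSE-style names; the service is nmb or nmbd)
        sudo /etc/init.d/nmb start

        # on the Windows 7 host: find its own address
        ipconfig

        # bypass browsing entirely by addressing the guest directly
        \\192.168.1.50\myshare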

  • publickey authentication only works with existing ssh session

    - by aaron
    publickey authentication only works for me if I've already got one ssh session open. I am trying to log into a host running Ubuntu 10.10 desktop with publickey authentication, and it fails when I first log in:

        [me@my-laptop:~]$ ssh -vv host
        ...
        debug1: Next authentication method: publickey
        debug1: Offering public key: /Users/me/.ssh/id_rsa
        ...
        debug2: we did not send a packet, disable method
        debug1: Next authentication method: password
        me@hosts's password:

    And the /var/log/auth.log output:

        Jan 16 09:57:11 host sshd[1957]: reverse mapping checking getaddrinfo for cpe-70-114-155-20.austin.res.rr.com [70.114.155.20] failed - POSSIBLE BREAK-IN ATTEMPT!
        Jan 16 09:57:13 host sshd[1957]: pam_sm_authenticate: Called
        Jan 16 09:57:13 host sshd[1957]: pam_sm_authenticate: username = [astacy]
        Jan 16 09:57:13 host sshd[1959]: Passphrase file wrapped
        Jan 16 09:57:15 host sshd[1959]: Error attempting to add filename encryption key to user session keyring; rc = [1]
        Jan 16 09:57:15 host sshd[1957]: Accepted password for astacy from 70.114.155.20 port 42481 ssh2
        Jan 16 09:57:15 host sshd[1957]: pam_unix(sshd:session): session opened for user astacy by (uid=0)
        Jan 16 09:57:20 host sudo: astacy : TTY=pts/0 ; PWD=/home/astacy ; USER=root ; COMMAND=/usr/bin/tail -f /var/log/auth.log

    The strange thing is that once I've got this first login session, I run the exact same ssh command, and publickey authentication works:

        [me@my-laptop:~]$ ssh -vv host
        ...
        debug1: Server accepts key: pkalg ssh-rsa blen 277
        ...
        [me@host:~]$

    And the /var/log/auth.log output is:

        Jan 16 09:59:11 host sshd[2061]: reverse mapping checking getaddrinfo for cpe-70-114-155-20.austin.res.rr.com [70.114.155.20] failed - POSSIBLE BREAK-IN ATTEMPT!
        Jan 16 09:59:11 host sshd[2061]: Accepted publickey for astacy from 70.114.155.20 port 39982 ssh2
        Jan 16 09:59:11 host sshd[2061]: pam_unix(sshd:session): session opened for user astacy by (uid=0)

    What do I need to do to make publickey authentication work on the first login?

    NOTE: When I installed Ubuntu 10.10, I checked the 'encrypt home folder' option. I'm wondering if this has something to do with the log message "Error attempting to add filename encryption key to user session keyring".
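
    The encrypted home folder is almost certainly the cause: until a first (password) login mounts the eCryptfs home, ~/.ssh/authorized_keys is unreadable, so sshd cannot verify the key; once any session has the home mounted, key logins work. The standard workaround is to keep authorized_keys outside the encrypted area. A sketch, using the username from the logs:

        # move the keys somewhere readable before the home is mounted
        sudo mkdir -p /etc/ssh/authorized-keys
        sudo cp ~/.ssh/authorized_keys /etc/ssh/authorized-keys/astacy
        sudo chown astacy /etc/ssh/authorized-keys/astacy
        sudo chmod 600 /etc/ssh/authorized-keys/astacy

        # /etc/ssh/sshd_config
        AuthorizedKeysFile /etc/ssh/authorized-keys/%u

        sudo service ssh restart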

  • Split MPEG video from command line?

    - by Tim
    I have a homemade DVD that I'm effectively trying to insert chapters into and rearrange: the original author burned it as one long chapter, and I'd like to rip it into smaller pieces and re-encode it into a new DVD. I ripped the DVD with the following command:

        mplayer dvd:// -dvd-device /dev/sr2 -dumpstream -dumpfile raw.vob

    I'm running Gentoo Linux with mplayer version 1.0-rc2_p20090731 (the latest available in Portage). I have a list of times that the chapters are supposed to span (for example 30:11-33:25), so my first thought was to rip the entire DVD and use mpgtx to cut out certain pieces of the file. My issue is that running mpgtx -i on the file reports quite a few timestamp jumps:

        Time stamps jumped from 59.753789 to 0.001622 at position 1d29800
        Time stamps jumped from 204963823030450.343750 to 31.165900 at position 2d4f800
        Time stamps jumped from 60.077878 to 0.001622 at position 43cc000
        Time stamps jumped from 60.024233 to 0.001622 at position 65c5000
        Time stamps jumped from 204963823068631.718750 to 52.549244 at position 7fd1000

    I've tried to fix the indexes using:

        mencoder raw.vob -oac copy -ovc copy -forceidx -o fixed.vob -of mpeg

    but mpgtx still reports timestamp issues. My immediate question: is there a way to take the ripped movie I have and correct its timestamps so I can cut it with mpgtx? If I can get that one issue out of the way, building the rest of the DVD will be smooth sailing. If it's not possible to fix the timestamps on this file: is there a better way to rip small chunks of the DVD into separate files for recompilation later? I'd very much like this to be done on Linux, and it'd be even better if I could script it somehow (feed in a list of start and end positions, or start times and durations, and get out a series of ripped files). If need be, I also have a Mac OS X machine available, but no Windows.

    Edit: I wound up finding another solution involving HandBrake and ffmpeg (with help from this question), but the question stands.

    Edit again: Turns out my other solution didn't quite work: the audio desynchronized by about five seconds in about half of my cut MPGs, so I'm back to square one. Anyone?
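
    For the scripted-cutting half of the question, here is a hedged sketch: ffmpeg re-muxes while rewriting timestamps, which may sidestep the mpgtx complaints entirely; the file and list formats are assumptions, and durations must be precomputed as end minus start:

        #!/bin/sh
        # cut.sh -- read "start duration name" triples and cut without re-encoding
        # chapters.txt lines look like:  00:30:11 00:03:14 chapter05
        while read start duration name; do
            ffmpeg -ss "$start" -i raw.vob -t "$duration" \
                   -vcodec copy -acodec copy "$name.vob"
        done < chapters.txt

    Stream copying can only cut on keyframe boundaries, so chapter starts may shift by a fraction of a second; if the audio drift returns, dropping the copy flags and re-encoding the cut pieces trades time for exact sync.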

  • LDAP, Active Directory and bears, oh my!

    - by Tim Post
    What I have:

    - Workstations running Ubuntu Jaunty, mounting /home from a remote NFS server. User accounts are still created locally on each individual workstation.
    - Workstations running Windows XP / Vista.
    - The NFS server (as noted above).
    - A Windows 2008 server.

    All machines share a single private network (LAN).

    What I need to accomplish: a single, intuitive (GUI-driven) place for an office administrator to create user accounts. This should let anyone log in to their (Linux or Windows) workstation, then fire up remote desktop and use the same login on the Windows 2008 server, from any machine on the network. I have read so much on Samba, LDAP vs. AD, etc., and now I'm even more confused than I was before I began researching the problem. Ideally, Linux and Windows users should be able to get to their local files once logged into the Win2008 server. I am a programmer, not an interoperability guru, and I'm completely lost on where to even start trying to accomplish this, plus I've run out of things to Google. How would you do this? Is it even possible?
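
    Given that a Windows 2008 server is already on the network, the path of least resistance is usually to let Active Directory be the single account store (its Users and Computers GUI covers the "office administrator" requirement, and Windows clients plus remote desktop authenticate against it natively) and then join the Linux workstations to the domain. On Jaunty-era Ubuntu that was commonly done with likewise-open; a sketch, with the domain name and admin account as placeholders:

        sudo apt-get install likewise-open
        sudo domainjoin-cli join office.local Administrator
        # afterwards, log in at the console as OFFICE\username

    Keeping the NFS-mounted homes is a separate, solvable step (e.g. pam_mkhomedir or mapping AD users onto the existing home paths), but centralizing the accounts first removes the per-workstation account creation.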

  • Securing SSH/SFTP and best practices on security

    - by MultiformeIngegno
    I'm on a fresh VPS with Ubuntu Server 12.04. I wanted to ask you about the good practices to apply to enhance security over a stock Ubuntu server. This is what I did up to now: I added Google Authenticator to SSH, then I created a new user (whom I'll use instead of 'root' for SSH and SFTP access), which I added to my /etc/sudoers list below 'root', so now it's:

        # User privilege specification
        root      ALL=(ALL:ALL) ALL
        new_user  ALL=(ALL:ALL) ALL

    Then I edited sshd_config, set PermitRootLogin to 'no', and restarted the ssh service. Is this OK? There are a few things I'd like to ask you though:

    1) What's the sense of adding a new (sudoer) user while the root user still exists (OK, it can't log in over SSH any more, but it's still there)?

    2) System files are owned by 'root'. I want to use my new_user to access via SFTP, but with it I can't edit those files! Should I mass-chmod them so that new_user has write permissions too? What's the good practice on this?

    Thanks in advance; I hope you'll tell me if I did something wrong and/or other ways to secure the system. :)
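
    On question 2, a hedged but firm note: mass-chmodding system files would undo most of the hardening. The usual pattern is to stay an unprivileged user and escalate per command, plus a little extra sshd tightening; the specifics below are suggestions, not a prescription:

        # edit root-owned files without loosening their permissions
        sudoedit /etc/nginx/nginx.conf      # path is just an example

        # /etc/ssh/sshd_config -- common extras
        AllowUsers new_user                 # refuse every other account by name
        PasswordAuthentication no           # only once key/OTP login is confirmed working

        # basic brute-force protection
        sudo apt-get install fail2ban

    On question 1: disabling root's remote login while keeping the account is the point; root must exist for the system itself, but attackers can no longer target a known username over SSH.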

  • wget is working only when used with sudo

    - by Yusuf
    I'm having quite a strange behavior with wget since yesterday. I can download files by using sudo wget, but when I try the same file with only wget, I get this error:

        yusufh@ubuntu-yuh:~$ wget http://www.kegel.com/wine/winetricks
        --2010-12-17 09:34:11--  http://www.kegel.com/wine/winetricks
        Resolving www.kegel.com... failed: Name or service not known.
        wget: unable to resolve host address `www.kegel.com'

    and with sudo wget:

        yusufh@ubuntu-yuh:~$ sudo wget http://www.kegel.com/wine/winetricks
        --2010-12-17 09:35:37--  http://www.kegel.com/wine/winetricks
        Connecting to 127.0.0.1:5865... connected.
        Proxy request sent, awaiting response... 200 OK
        Length: 190672 (186K) [text/plain]
        Saving to: `winetricks'
        100%[==================================================================================================>] 190,672 --.-K/s in 0.03s
        2010-12-17 09:35:37 (6.92 MB/s) - `winetricks' saved [190672/190672]

    After the comments below, here is an update: I can use Google Chrome or Firefox perfectly without running them as root. I use ntlmaps to connect to the office proxy, so I need to use 127.0.0.1:5865 as the proxy for clients. Result of env | grep -i proxy:

        NO_PROXY=localhost,127.0.0.0/8,*.local
        http_proxy=127.0.0.1:5865
        ftp_proxy=127.0.0.1:5865
        all_proxy=socks://127.0.0.1:5865/
        ALL_PROXY=socks://127.0.0.1:5865/
        https_proxy=127.0.0.1:5865
        no_proxy=localhost,127.0.0.0/8,*.local

    while sudo env | grep -i proxy is empty! HELP!
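
    A hedged diagnosis from the transcripts themselves: plain wget attempts a direct DNS lookup, meaning it isn't honoring those proxy variables, while root (whose environment is empty) evidently picks the proxy up from a wgetrc file. wget expects proxy variables as full URLs, and bare host:port values like the ones above are easy for it to ignore. A sketch:

        # use URL form in the environment
        export http_proxy=http://127.0.0.1:5865/
        export https_proxy=http://127.0.0.1:5865/
        export ftp_proxy=http://127.0.0.1:5865/

        # or make it persistent for this user
        cat >> ~/.wgetrc <<'EOF'
        use_proxy = on
        http_proxy = http://127.0.0.1:5865/
        https_proxy = http://127.0.0.1:5865/
        EOF

    Checking /etc/wgetrc (or /root/.wgetrc) should confirm where root's working proxy setting comes from.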

  • Free tiered storage automation in Linux?

    - by NginUS
    I have a couple of virtualized file servers running in QEMU/KVM on Proxmox VE. The physical host has 4 storage tiers with significant performance variances, attached both locally and via NFS. These will be provided to the file server(s) as local disks, abstracted into pools, and handling multiple streams of data for the network. My aim is for this abstraction layer to intelligently pool the tiers. There's a similar post on the site here: Home-brew automatic tiered storage solutions with Linux? (Memory - SSD - HDD - remote storage), in which the accepted answer was a suggestion to abandon a Linux solution for NexentaStor. I like the idea of running NexentaStor; it almost fits the bill. NexentaStor provides Hybrid Storage Pools, I love the idea of checksumming, and 16TB without incurring licensing fees is a huge plus as well. After the expense of the hardware, free is about all my budget can handle. I don't know whether ZFS pools are adaptive or dynamically allocated based on load, but it becomes irrelevant, since NexentaStor doesn't support virtio network or block drivers, which is a must in my environment. Then I saw a commercial solution called SmartMove (http://www.enigmadata.com/smartmove.html), and it looks like a step in the right direction, but I'm so broke I'd be wasting their time to even ask for a quote, so I'm looking for another option. I'm after a Linux implementation that supports virtio drivers, and I'm at a loss as to which software is up to it.

  • PHP sessions corrupt

    - by Baversjo
    Using symfony framework 1.4 I have created a website. I'm using sfGuard for authentication. Now, this is working great on WAMP (Windows): I can log in to several accounts on different browsers and use the website. I have Ubuntu Server 9.10 running Apache (everything up to date and default configuration). On my server, when I log in to the website in one browser, it works great. But when I log in with another user account on the public website from my other computer, the login is successful, and then when I refresh or go to another page, the first user is shown as logged in instead! Also, when I press logout, it doesn't show that I'm logged out after the page loads; when I press F5 again, I'm logged out. As mentioned, all this works as expected on my local installation. I'm thinking there's something wrong with my PHP session configuration on my Ubuntu server, but I've never touched it. Please help me. This is a school project and I'm presenting it today :(
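
    A hedged hunch: sessions leaking between different browsers usually isn't session-store corruption but a caching layer serving one user's rendered page to another, and symfony 1.4 ships a view cache that must stay off (or be made user-aware) for per-user pages. Places to check, sketched:

        # apps/frontend/config/settings.yml  (app name is an assumption)
        all:
          .settings:
            cache: false

        # after any config change
        php symfony cache:clear

    It is also worth making sure Apache (mod_cache) or any reverse proxy in front of the server is not caching responses that carry Set-Cookie headers; the delayed logout in particular smells like a cached page.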

  • How can I recover my system after running 'mkfs' on the system partition?

    - by Filip Podgórny
    I am not a Linux user, and while doing some homework I blindly typed sudo mkfs ext3 dev/sda2 (I had Ubuntu installed inside my Windows installation). I did a few more things, then shut Ubuntu down to switch back to Windows. "No operating system installed" is the message I'm getting now. I plugged my HDD into another computer and all my files are still there. What should I do to get my Windows installation back?

    df -l (before mkfs):

        /dev/loop0   29G  2,0G   27G   8% /
        udev        3,0G  4,0K  3,0G   1% /dev
        tmpfs       1,2G  900K  1,2G   1% /run
        none        5,0M     0  5,0M   0% /run/lock
        none        3,0G  1,3M  3,0G   1% /run/shm
        /dev/sda3   455G  123G  333G  27% /host
        /dev/sdb1   1,9G  820M  1,1G  43% /media/PHONE CARD

    mkfs output (translated from Polish):

        mke2fs 1.41.14 (22-Dec-2010)
        Filesystem label=
        OS type: Linux
        Block size=1024 (log=0)
        Fragment size=1024 (log=0)
        Stride=0 blocks, Stripe width=0 blocks
        25688 inodes, 102400 blocks
        5120 blocks (5.00%) reserved for the super user
        First data block=1
        Maximum filesystem blocks=67371008
        13 block groups
        8192 blocks per group, 8192 fragments per group
        1976 inodes per group
        Superblock backups stored on blocks:
                8193, 24577, 40961, 57345, 73729
        Writing inode tables: done
        Creating journal (4096 blocks): done
        Writing superblocks and filesystem accounting information: done
        This filesystem will be automatically checked every 30 mounts or 180 days,
        whichever comes first. Use tune2fs -c or -i to override.
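
    Reading the evidence (a hedged interpretation): the /dev/loop0 root and /host mount mean this was a Wubi-style Ubuntu living inside the Windows filesystem on sda3, and the 102400 x 1024-byte blocks (about 100 MB) that mkfs formatted match the small "System Reserved" boot partition Windows 7 creates, which would explain the boot failure while the data on sda3 survives. The standard repair runs from Windows install/repair media; these are the usual Windows Recovery Environment commands:

        REM boot the Windows DVD -> Repair your computer -> Command Prompt
        bootrec /fixmbr
        bootrec /fixboot
        bootrec /rebuildbcd

    If automatic Startup Repair is offered first, running it two or three times in a row is a common, if inelegant, fix for the same problem.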

  • How to move Mdadm RAID drive (EBS based) to different AWS Instance

    - by Stanley
    We have a media-rich web application that is hosted on AWS. We have several web servers and we have an NFS server. On the NFS server (a Linux server) we have several EBS volumes that are mounted, and we've used mdadm to combine the mounted volumes into a single RAID volume. The web servers simply access the NFS storage through a mount point. Amazon has now let us know that they will be performing power maintenance on this server in a couple of days. Since all our media is on it, this would render our site unusable for the hours while Amazon is working on it. We want to try to prevent this downtime. I was thinking that we could prevent server downtime by setting up a new server temporarily, attaching the EBS drives (the RAID volume) to that server, and having our web servers point there during maintenance. This is a very high-risk operation, since it involves several terabytes of our production data. What would be the safe way to move our logical RAID drive (md0) to a new Amazon instance? I was hoping I could start by building the new server, mounting the EBS volumes, and assembling the RAID partition using mdadm --assemble --scan before unmounting from the existing instance, so that I could first test that everything works, thus having it mounted on two instances at the same time, but I don't believe that is possible with the way filesystems work. "How do I move a Linux software RAID to a new machine?" suggests a way to move drives, but isn't really a cloud-based question. Perhaps there are simpler ways to prevent system downtime with our solution being hosted on the cloud? I have considered taking an EBS snapshot, but that tries to replicate all the many terabytes of mounted storage, so this is not a practical solution. Any ideas?
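
    For what it's worth, the two-instances-at-once test is off the table anyway: an EBS volume can only be attached to one instance at a time. But because volumes detach and reattach cleanly, the md array itself can move; the risky part reduces to making sure nothing writes during the switch. A sketch, with device names, mount points, and volume IDs as placeholders:

        # on the old NFS server: quiesce and release the array
        umount /export/media
        mdadm --detail --scan          # record the array definition first
        mdadm --stop /dev/md0

        # move each EBS volume (ec2-api-tools era syntax)
        ec2-detach-volume vol-xxxxxxxx
        ec2-attach-volume vol-xxxxxxxx -i i-newinstance -d /dev/sdf

        # on the new instance: reassemble and mount
        mdadm --assemble --scan
        mount /dev/md0 /export/media

    Practicing the whole sequence on a throwaway array of small volumes first would de-risk the production move considerably.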
