Search Results

Search found 31206 results on 1249 pages for 'version detection'.


  • Upon reboot, Linux software RAID fails to include one device of a RAID1 array

    - by user1389890
    One of my four Linux software raid arrays drops one of its two devices when I reboot my system. The other three arrays work fine. I am running RAID1 on kernel version 2.6.32-5-amd64 (Debian Squeeze). Every time I reboot, /dev/md2 comes up with only one device. I can manually add the device by saying $ sudo mdadm /dev/md2 --add /dev/sdc1. This works fine, and mdadm confirms that the device has been re-added as follows: mdadm: re-added /dev/sdc1 After adding the device and allowing the array time to resynch, this is what the output of $ cat /proc/mdstat looks like: Personalities : [raid1] md3 : active raid1 sda4[0] sdb4[1] 244186840 blocks super 1.2 [2/2] [UU] md2 : active raid1 sdc1[0] sdd1[1] 732574464 blocks [2/2] [UU] md1 : active raid1 sda3[0] sdb3[1] 722804416 blocks [2/2] [UU] md0 : active raid1 sda1[0] sdb1[1] 6835520 blocks [2/2] [UU] unused devices: <none> Then after I reboot, this is what the output of $ cat /proc/mdstat looks like: Personalities : [raid1] md3 : active raid1 sda4[0] sdb4[1] 244186840 blocks super 1.2 [2/2] [UU] md2 : active raid1 sdd1[1] 732574464 blocks [2/1] [_U] md1 : active raid1 sda3[0] sdb3[1] 722804416 blocks [2/2] [UU] md0 : active raid1 sda1[0] sdb1[1] 6835520 blocks [2/2] [UU] unused devices: <none> During reboot, here is the output of $ sudo cat /var/log/syslog | grep mdadm : Jun 22 19:00:08 rook mdadm[1709]: RebuildFinished event detected on md device /dev/md2 Jun 22 19:00:08 rook mdadm[1709]: SpareActive event detected on md device /dev/md2, component device /dev/sdc1 Jun 22 19:00:20 rook kernel: [ 7819.446412] mdadm: sending ioctl 1261 to a partition! Jun 22 19:00:20 rook kernel: [ 7819.446415] mdadm: sending ioctl 1261 to a partition! Jun 22 19:00:20 rook kernel: [ 7819.446782] mdadm: sending ioctl 1261 to a partition! Jun 22 19:00:20 rook kernel: [ 7819.446785] mdadm: sending ioctl 1261 to a partition! Jun 22 19:00:20 rook kernel: [ 7819.515844] mdadm: sending ioctl 1261 to a partition! Jun 22 19:00:20 rook kernel: [ 7819.515847] mdadm: sending ioctl 1261 to a partition! Jun 22 19:00:20 rook kernel: [ 7819.606829] mdadm: sending ioctl 1261 to a partition! Jun 22 19:00:20 rook kernel: [ 7819.606832] mdadm: sending ioctl 1261 to a partition! Jun 22 19:03:48 rook kernel: [ 8027.855616] mdadm: sending ioctl 1261 to a partition! Jun 22 19:03:48 rook kernel: [ 8027.855620] mdadm: sending ioctl 1261 to a partition! Jun 22 19:03:48 rook kernel: [ 8027.855950] mdadm: sending ioctl 1261 to a partition! Jun 22 19:03:48 rook kernel: [ 8027.855952] mdadm: sending ioctl 1261 to a partition! Jun 22 19:03:49 rook kernel: [ 8027.962169] mdadm: sending ioctl 1261 to a partition! Jun 22 19:03:49 rook kernel: [ 8027.962171] mdadm: sending ioctl 1261 to a partition! Jun 22 19:03:49 rook kernel: [ 8028.054365] mdadm: sending ioctl 1261 to a partition! Jun 22 19:03:49 rook kernel: [ 8028.054368] mdadm: sending ioctl 1261 to a partition! Jun 22 19:10:23 rook kernel: [ 9.588662] mdadm: sending ioctl 1261 to a partition! Jun 22 19:10:23 rook kernel: [ 9.588664] mdadm: sending ioctl 1261 to a partition! Jun 22 19:10:23 rook kernel: [ 9.601990] mdadm: sending ioctl 1261 to a partition! Jun 22 19:10:23 rook kernel: [ 9.601991] mdadm: sending ioctl 1261 to a partition! Jun 22 19:10:23 rook kernel: [ 9.602693] mdadm: sending ioctl 1261 to a partition! Jun 22 19:10:23 rook kernel: [ 9.602695] mdadm: sending ioctl 1261 to a partition! Jun 22 19:10:23 rook kernel: [ 9.605981] mdadm: sending ioctl 1261 to a partition! 
Jun 22 19:10:23 rook kernel: [ 9.605983] mdadm: sending ioctl 1261 to a partition! Jun 22 19:10:23 rook kernel: [ 9.606138] mdadm: sending ioctl 800c0910 to a partition! Jun 22 19:10:23 rook kernel: [ 9.606139] mdadm: sending ioctl 800c0910 to a partition! Jun 22 19:10:48 rook mdadm[1737]: DegradedArray event detected on md device /dev/md2 Here is the result of $ cat /etc/mdadm/mdadm.conf: ARRAY /dev/md0 metadata=0.90 UUID=92121d42:37f46b82:926983e9:7d8aad9b ARRAY /dev/md1 metadata=0.90 UUID=9c1bafc3:1762d51d:c1ae3c29:66348110 ARRAY /dev/md2 metadata=0.90 UUID=98cea6ca:25b5f305:49e8ec88:e84bc7f0 ARRAY /dev/md3 metadata=1.2 name=rook:3 UUID=ca3fce37:95d49a09:badd0ddc:b63a4792 Here is the output of $ sudo mdadm -E /dev/sdc1 after re-adding the device and letting it resync: /dev/sdc1: Magic : a92b4efc Version : 0.90.00 UUID : 98cea6ca:25b5f305:49e8ec88:e84bc7f0 (local to host rook) Creation Time : Sun Jul 13 08:05:55 2008 Raid Level : raid1 Used Dev Size : 732574464 (698.64 GiB 750.16 GB) Array Size : 732574464 (698.64 GiB 750.16 GB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 2 Update Time : Mon Jun 24 07:42:49 2013 State : clean Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 Checksum : 5fd6cc13 - correct Events : 180998 Number Major Minor RaidDevice State this 0 8 33 0 active sync /dev/sdc1 0 0 8 33 0 active sync /dev/sdc1 1 1 8 49 1 active sync /dev/sdd1 Here is the output of $ sudo mdadm -D /dev/md2 after re-adding the device and letting it resync: /dev/md2: Version : 0.90 Creation Time : Sun Jul 13 08:05:55 2008 Raid Level : raid1 Array Size : 732574464 (698.64 GiB 750.16 GB) Used Dev Size : 732574464 (698.64 GiB 750.16 GB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 2 Persistence : Superblock is persistent Update Time : Mon Jun 24 07:42:49 2013 State : clean Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 UUID : 98cea6ca:25b5f305:49e8ec88:e84bc7f0 (local to host rook) Events : 0.180998 Number Major Minor RaidDevice State 0 8 33 0 active sync /dev/sdc1 1 8 49 1 active sync /dev/sdd1 I also ran $ sudo smartctl -t long /dev/sdc and no hardware issues were detected. As long as I do not reboot, /dev/md2 seems to work fine. Does anyone have any suggestions?
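    A hedged first check, assuming the stock Debian Squeeze setup where the initramfs carries its own copy of mdadm.conf: compare the live array definitions against the config file and rebuild the initramfs so boot-time assembly uses the current layout. The commands below are standard mdadm/Debian tooling, not taken from the poster's system:

        $ sudo mdadm --detail --scan          # ARRAY lines as the running kernel sees them
        $ cat /etc/mdadm/mdadm.conf           # should list the same UUIDs for md0..md3
        $ sudo update-initramfs -u            # re-embed the current mdadm.conf into the initramfs

    If the two listings already agree, the next suspect is the order in which sdc appears during early boot rather than the superblocks themselves.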

    Read the article

  • Can't start httpd 2.4.9 with self-signed SSL certificate

    - by Smollet
    I cannot start the httpd 2.4.9 (tried 2.4.x too) on CentOS 6.5 with the simplest SSL config possible. The openssl version installed on the machine is OpenSSL 1.0.1e-fips 11 Feb 2013 (I've upgraded it using 'yum update' to the latest patched version as well) I have compiled and installed the httpd 2.4.9 using the following commands: ./configure --enable-ssl --with-ssl=/usr/local/ssl/ --enable-proxy=shared --enable-proxy_wstunnel=shared --with-apr=apr-1.5.1/ --with-apr-util=apr-util-1.5.3/ make make install Now I'm generating the default self-signed certificate as described in the CentOS HowTo: openssl genrsa -out ca.key 2048 openssl req -new -key ca.key -out ca.csr openssl x509 -req -days 365 -in ca.csr -signkey ca.key -out ca.crt cp ca.crt /etc/pki/tls/certs cp ca.key /etc/pki/tls/private/ca.key cp ca.csr /etc/pki/tls/private/ca.csr Here is my httpd-ssl.conf file: Listen 443 SSLCipherSuite HIGH:MEDIUM:!aNULL:!MD5 SSLPassPhraseDialog builtin SSLSessionCache "shmcb:/usr/local/apache2/logs/ssl_scache(512000)" SSLSessionCacheTimeout 300 <VirtualHost *:443> SSLEngine on SSLCertificateFile /etc/pki/tls/certs/ca.crt SSLCertificateKeyFile /etc/pki/tls/private/ca.key <FilesMatch "\.(cgi|shtml|phtml|php)$"> SSLOptions +StdEnvVars </FilesMatch> <Directory "/usr/local/apache2/cgi-bin"> SSLOptions +StdEnvVars </Directory> BrowserMatch "MSIE [2-5]" \ nokeepalive ssl-unclean-shutdown \ downgrade-1.0 force-response-1.0 CustomLog "/usr/local/apache2/logs/ssl_request_log" \ "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b" </VirtualHost> when I start httpd using bin/apachectl -k start I get following errors in the error_log: Wed Jun 04 00:29:27.995654 2014] [ssl:info] [pid 24021:tid 139640404293376] AH01887: Init: Initializing (virtual) servers for SSL [Wed Jun 04 00:29:27.995726 2014] [ssl:info] [pid 24021:tid 139640404293376] AH01914: Configuring server 192.168.9.128:443 for SSL protocol [Wed Jun 04 00:29:27.995863 2014] [ssl:debug] [pid 24021:tid 139640404293376] ssl_engine_init.c(312): AH01893: Configuring TLS extension handling [Wed Jun 04 00:29:27.996111 2014] [ssl:debug] [pid 24021:tid 139640404293376] ssl_util_ssl.c(343): AH02412: [192.168.9.128:443] Cert matches for name '192.168.9.128' [subject: CN=192.168.9.128,OU=XXX,O=XXXX,L=XXXX,ST=NRW,C=DE / issuer: CN=192.168.9.128,OU=XXX,O=XXXX,L=XXXX,ST=NRW,C=DE / serial: AF04AF31799B7695 / notbefore: Jun 3 22:26:45 2014 GMT / notafter: Jun 3 22:26:45 2015 GMT] [Wed Jun 04 00:29:27.996122 2014] [ssl:info] [pid 24021:tid 139640404293376] AH02568: Certificate and private key 192.168.9.128:443:0 configured from /etc/pki/tls/certs/ca.crt and /etc/pki/tls/private/ca.key [Wed Jun 04 00:29:27.996209 2014] [ssl:info] [pid 24021:tid 139640404293376] AH01914: Configuring server 192.168.9.128:443 for SSL protocol [Wed Jun 04 00:29:27.996280 2014] [ssl:debug] [pid 24021:tid 139640404293376] ssl_engine_init.c(312): AH01893: Configuring TLS extension handling [Wed Jun 04 00:29:27.996295 2014] [ssl:emerg] [pid 24021:tid 139640404293376] AH02572: Failed to configure at least one certificate and key for 192.168.9.128:443 [Wed Jun 04 00:29:27.996303 2014] [ssl:emerg] [pid 24021:tid 139640404293376] SSL Library Error: error:0906D06C:PEM routines:PEM_read_bio:no start line (Expecting: DH PARAMETERS) -- Bad file contents or format - or even just a forgotten SSLCertificateKeyFile? 
[Wed Jun 04 00:29:27.996308 2014] [ssl:emerg] [pid 24021:tid 139640404293376] SSL Library Error: error:0906D06C:PEM routines:PEM_read_bio:no start line (Expecting: EC PARAMETERS) -- Bad file contents or format - or even just a forgotten SSLCertificateKeyFile? [Wed Jun 04 00:29:27.996318 2014] [ssl:emerg] [pid 24021:tid 139640404293376] SSL Library Error: error:140A80B1:SSL routines:SSL_CTX_check_private_key:no certificate assigned [Wed Jun 04 00:29:27.996321 2014] [ssl:emerg] [pid 24021:tid 139640404293376] AH02312: Fatal error initialising mod_ssl, exiting. AH00016: Configuration Failed I then try to generate missing DH PARAMETERS and EC PARAMETERS: openssl dhparam -outform PEM -out dhparam.pem 2048 openssl ecparam -out ec_param.pem -name prime256v1 cat dhparam.pem ec_param.pem >> /etc/pki/tls/certs/ca.crt And it mitigates the error but the next comes out: [Wed Jun 04 00:34:05.021438 2014] [ssl:info] [pid 24089:tid 140719371077376] AH01887: Init: Initializing (virtual) servers for SSL [Wed Jun 04 00:34:05.021487 2014] [ssl:info] [pid 24089:tid 140719371077376] AH01914: Configuring server 192.168.9.128:443 for SSL protocol [Wed Jun 04 00:34:05.021874 2014] [ssl:debug] [pid 24089:tid 140719371077376] ssl_engine_init.c(312): AH01893: Configuring TLS extension handling [Wed Jun 04 00:34:05.022050 2014] [ssl:debug] [pid 24089:tid 140719371077376] ssl_util_ssl.c(343): AH02412: [192.168.9.128:443] Cert matches for name '192.168.9.128' [subject: CN=192.168.9.128,OU=XXX,O=XXXX,L=XXXX,ST=NRW,C=DE / issuer: CN=192.168.9.128,OU=XXX,O=XXXX,L=XXXX,ST=NRW,C=DE / serial: AF04AF31799B7695 / notbefore: Jun 3 22:26:45 2014 GMT / notafter: Jun 3 22:26:45 2015 GMT] [Wed Jun 04 00:34:05.022066 2014] [ssl:info] [pid 24089:tid 140719371077376] AH02568: Certificate and private key 192.168.9.128:443:0 configured from /etc/pki/tls/certs/ca.crt and /etc/pki/tls/private/ca.key [Wed Jun 04 00:34:05.022285 2014] [ssl:debug] [pid 24089:tid 140719371077376] ssl_engine_init.c(1016): AH02540: Custom DH parameters (2048 bits) for 192.168.9.128:443 loaded from /etc/pki/tls/certs/ca.crt [Wed Jun 04 00:34:05.022389 2014] [ssl:debug] [pid 24089:tid 140719371077376] ssl_engine_init.c(1030): AH02541: ECDH curve prime256v1 for 192.168.9.128:443 specified in /etc/pki/tls/certs/ca.crt [Wed Jun 04 00:34:05.022397 2014] [ssl:info] [pid 24089:tid 140719371077376] AH01914: Configuring server 192.168.9.128:443 for SSL protocol [Wed Jun 04 00:34:05.022464 2014] [ssl:debug] [pid 24089:tid 140719371077376] ssl_engine_init.c(312): AH01893: Configuring TLS extension handling [Wed Jun 04 00:34:05.022478 2014] [ssl:emerg] [pid 24089:tid 140719371077376] AH02572: Failed to configure at least one certificate and key for 192.168.9.128:443 [Wed Jun 04 00:34:05.022488 2014] [ssl:emerg] [pid 24089:tid 140719371077376] SSL Library Error: error:140A80B1:SSL routines:SSL_CTX_check_private_key:no certificate assigned [Wed Jun 04 00:34:05.022491 2014] [ssl:emerg] [pid 24089:tid 140719371077376] AH02312: Fatal error initialising mod_ssl, exiting. AH00016: Configuration Failed I have tried to generate the simple certificate/key pair exactly as described in the httpd docs Unfortunately, I still get exact same errors as above. I've seen a bug report with the similar issue: https://issues.apache.org/bugzilla/show_bug.cgi?id=56410 But the openssl version I have is reported as working there. I've also tried to apply the patch from the report as well as build the latest 2.4.x branch with no success, I get the same errors as above. 
I have also tried to create a short chain of certificates and set the root CA certificate using SSLCertificateChainFile directive. That didn't help either, I get exact same errors as above. I'm not interested in setting up hardened security, etc. The only thing I need is to start httpd with the simplest SSL config possible to continue testing proxy config for the mod_proxy_wstunnel Had anybody encountered and solved this issue? Is my sequence for creating a self-signed certificate incorrect? I'd appreciate any help very much!
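    One hedged reading of the log is that the same address:port is being configured twice and the second pass belongs to another SSL vhost (for example a leftover default in an included conf file) that has no SSLCertificateFile of its own; the failure would then have nothing to do with how the self-signed pair was generated. Two quick checks, using only stock Apache and OpenSSL commands:

        $ /usr/local/apache2/bin/apachectl -S                                          # lists every parsed <VirtualHost>; look for a second *:443 entry
        $ openssl x509 -noout -modulus -in /etc/pki/tls/certs/ca.crt | openssl md5
        $ openssl rsa  -noout -modulus -in /etc/pki/tls/private/ca.key | openssl md5   # the two digests must match if cert and key belong together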

    Read the article

  • PSTN Trunk TDM400P Install on Asterisk / Trixbox

    - by Jona
    Hey All, I'm trying to get a TDM400P card with FXO module to connect to our PSTN line. The card is correctly detected by Linux: [trixbox1.localdomain asterisk]# lspci 00:09.0 Communication controller: Tiger Jet Network Inc. Tiger3XX Modem/ISDN interface And asterisk can see the channel: > trixbox1*CLI> dahdi show channel 1 > Channel: 1LI> File Descriptor: 14 > Span: 11*CLI> Extension: I> Dialing: > noI> Context: from-pstn Caller ID: I> > Calling TON: 0 Caller ID name: > Mailbox: none Destroy: 0LI> InAlarm: > 1LI> Signalling Type: FXS Kewlstart > Radio: 0*CLI> Owner: <None> Real: > <None>> Callwait: <None> Threeway: > <None> Confno: -1LI> Propagated > Conference: -1 Real in conference: 0 > DSP: no1*CLI> Busy Detection: no TDD: > no1*CLI> Relax DTMF: no > Dialing/CallwaitCAS: 0/0 Default law: > ulaw Fax Handled: no Pulse phone: no > DND: no1*CLI> Echo Cancellation: > trixbox1128 taps trixbox1(unless TDM > bridged) currently OFF Actual > Confinfo: Num/0, Mode/0x0000 Actual > Confmute: No > Hookstate (FXS only): Onhook I have configured a "ZAP Trunk (DAHDI compatibility Mode)" with the ZAP identifier 1 and an outbound route, but when ever I try to make an external call via it I get the "All Circuits are busy now, please try your call again later message". The FXO module is directly connected to our phone line from BT via a BT-RJ11 cable. I'm guessing I've missed a configuration step somewhere but no idea where, any help greatly appreciated.
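    Worth noting: the pasted channel dump shows InAlarm: 1, which means DAHDI itself considers the span down, and Asterisk will answer every outbound attempt on it with "all circuits are busy". A hedged pair of checks, assuming a stock Trixbox/DAHDI install:

        $ sudo dahdi_cfg -vv                      # reload channel config from /etc/dahdi/system.conf
        $ asterisk -rx "dahdi show status"        # the span should report OK, not RED/alarm

    If the span stays in alarm with the BT line plugged in, the fault is at the cabling/module level rather than in the trunk or outbound-route definition.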

    Read the article

  • VirtualBox guest responds to ping but all ports closed in nmap

    - by jeremyjjbrown
    I want to setup a test database on a vm for development purposes but I cannot connect to the server via the network. I've got Ubuntu 12.04vm installed on 12.04 host in Virtualbox 4.2.4 set to - Bridged network mode - Promiscuous Allow All When I try to ping the virtual guest from any network client I get the expected result. PING 192.168.1.209 (192.168.1.209) 56(84) bytes of data. 64 bytes from 192.168.1.209: icmp_req=1 ttl=64 time=0.427 ms ... Internet access inside the vm is normal But when I nmap it I get nothin! jeremy@bangkok:~$ nmap -sV -p 1-65535 192.168.1.209 Starting Nmap 5.21 ( http://nmap.org ) at 2012-11-15 18:39 CST Nmap scan report for jeremy (192.168.1.209) Host is up (0.0032s latency). All 65535 scanned ports on jeremy (192.168.1.209) are closed Service detection performed. Please report any incorrect results at http://nmap.org/submit/ Nmap done: 1 IP address (1 host up) scanned in 0.88 seconds ufw and iptables on VM... jeremy@jeremy:~$ sudo service ufw stop [sudo] password for jeremy: ufw stop/waiting jeremy@jeremy:~$ sudo iptables -L Chain INPUT (policy ACCEPT) target prot opt source destination Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination I have scanned around and have no reason to believe that my router is blocking internal ports. jeremy@bangkok:~$ nmap -v 192.168.1.2 Starting Nmap 5.21 ( http://nmap.org ) at 2012-11-15 18:44 CST Initiating Ping Scan at 18:44 Scanning 192.168.1.2 [2 ports] Completed Ping Scan at 18:44, 0.00s elapsed (1 total hosts) Initiating Parallel DNS resolution of 1 host. at 18:44 Completed Parallel DNS resolution of 1 host. at 18:44, 0.03s elapsed Initiating Connect Scan at 18:44 Scanning 192.168.1.2 [1000 ports] Discovered open port 445/tcp on 192.168.1.2 Discovered open port 139/tcp on 192.168.1.2 Discovered open port 3306/tcp on 192.168.1.2 Discovered open port 80/tcp on 192.168.1.2 Discovered open port 111/tcp on 192.168.1.2 Discovered open port 53/tcp on 192.168.1.2 Discovered open port 5902/tcp on 192.168.1.2 Discovered open port 8090/tcp on 192.168.1.2 Discovered open port 6881/tcp on 192.168.1.2 Completed Connect Scan at 18:44, 0.02s elapsed (1000 total ports) Nmap scan report for 192.168.1.2 Host is up (0.0017s latency). Not shown: 991 closed ports PORT STATE SERVICE 53/tcp open domain 80/tcp open http 111/tcp open rpcbind 139/tcp open netbios-ssn 445/tcp open microsoft-ds 3306/tcp open mysql 5902/tcp open vnc-2 6881/tcp open bittorrent-tracker 8090/tcp open unknown Read data files from: /usr/share/nmap Nmap done: 1 IP address (1 host up) scanned in 0.08 seconds Answer... Turns out all of the ports were open to the network. I installed open ssh and confirmed it. Then I edited my db conf to listen to external IP's and all was well.
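    As the poster's own follow-up shows, "closed" from nmap usually just means nothing is bound to the scanned interface. A hedged way to verify that before touching the VM's network settings (the MySQL paths below are the usual Ubuntu defaults, not confirmed from the post):

        $ sudo netstat -tlnp                      # 127.0.0.1:3306 means the listener is local-only
        $ grep bind-address /etc/mysql/my.cnf     # set to 0.0.0.0 (or the bridged IP) and restart mysql to expose it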

    Read the article

  • Unable to SSH into EC2 instance on Fedora 17

    - by abhishek
    I did following steps But I am not able to SSH to it(Same steps work fine on Fedora 14 image). I am getting Permission denied (publickey,gssapi-keyex,gssapi-with-mic) I created new instance using fedora 17 amazon community image(ami-2ea50247). I copied my ssh keys under /home/usertest/.ssh/ after creating a usertest I have SELINUX=disabled here is Debug info: $ ssh -vvv ec2-54-243-101-41.compute-1.amazonaws.com ssh -vvv ec2-54-243-101-41.compute-1.amazonaws.com OpenSSH_5.2p1, OpenSSL 1.0.0b-fips 16 Nov 2010 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to ec2-54-243-101-41.compute-1.amazonaws.com [54.243.101.41] port 22. debug1: Connection established. debug1: identity file /home/usertest/.ssh/identity type -1 debug1: identity file /home/usertest/.ssh/id_rsa type -1 debug3: Not a RSA1 key file /home/usertest/.ssh/id_dsa. debug2: key_type_from_name: unknown key type '-----BEGIN' debug3: key_read: missing keytype debug2: key_type_from_name: unknown key type 'Proc-Type:' debug3: key_read: missing keytype debug2: key_type_from_name: unknown key type 'DEK-Info:' debug3: key_read: missing keytype debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug2: key_type_from_name: unknown key type '-----END' debug3: key_read: missing keytype debug1: identity file /home/usertest/.ssh/id_dsa type 2 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9 debug1: match: OpenSSH_5.9 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.2 debug2: fd 3 setting O_NONBLOCK debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: 
aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-256-96,hmac-sha2-512,hmac-sha2-512-96,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-256-96,hmac-sha2-512,hmac-sha2-512-96,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: mac_setup: found hmac-md5 debug1: kex: server->client aes128-ctr hmac-md5 none debug2: mac_setup: found hmac-md5 debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug2: dh_gen_key: priv key bits set: 131/256 debug2: bits set: 506/1024 debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug3: check_host_in_hostfile: filename /home/usertest/.ssh/known_hosts debug3: check_host_in_hostfile: match line 17 debug3: check_host_in_hostfile: filename /home/usertest/.ssh/known_hosts debug3: check_host_in_hostfile: match line 17 debug1: Host 'ec2-54-243-101-41.compute-1.amazonaws.com' is known and matches the RSA host key. debug1: Found key in /home/usertest/.ssh/known_hosts:17 debug2: bits set: 500/1024 debug1: ssh_rsa_verify: signature correct debug2: kex_derive_keys debug2: set_newkeys: mode 1 debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug2: set_newkeys: mode 0 debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: /home/usertest/.ssh/identity ((nil)) debug2: key: /home/usertest/.ssh/id_rsa ((nil)) debug2: key: /home/usertest/.ssh/id_dsa (0x7f904b5ae260) debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic debug3: start over, passed a different list publickey,gssapi-keyex,gssapi-with-mic debug3: preferred gssapi-with-mic,publickey,keyboard-interactive,password debug3: authmethod_lookup gssapi-with-mic debug3: remaining preferred: publickey,keyboard-interactive,password debug3: authmethod_is_enabled gssapi-with-mic debug1: Next authentication method: gssapi-with-mic debug3: Trying to reverse map address 54.243.101.41. debug1: Unspecified GSS failure. Minor code may provide more information Credentials cache file '/tmp/krb5cc_500' not found debug1: Unspecified GSS failure. Minor code may provide more information Credentials cache file '/tmp/krb5cc_500' not found debug1: Unspecified GSS failure. 
Minor code may provide more information debug2: we did not send a packet, disable method debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive,password debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Trying private key: /home/usertest/.ssh/identity debug3: no such identity: /home/usertest/.ssh/identity debug1: Trying private key: /home/usertest/.ssh/id_rsa debug3: no such identity: /home/usertest/.ssh/id_rsa debug1: Offering public key: /home/usertest/.ssh/id_dsa debug3: send_pubkey_test debug2: we sent a publickey packet, wait for reply debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic debug2: we did not send a packet, disable method debug1: No more authentication methods to try. Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
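    Two common culprits that the client-side debug output alone cannot confirm: overly permissive permissions on the server-side ~/.ssh, and logging in as the wrong account (on the Fedora cloud images the default user is usually ec2-user, provisioned with the key pair chosen at launch). A hedged sketch, where my-keypair.pem is a placeholder for that launch key:

        $ chmod 700 /home/usertest/.ssh
        $ chmod 600 /home/usertest/.ssh/authorized_keys
        $ ssh -i ~/.ssh/my-keypair.pem ec2-user@ec2-54-243-101-41.compute-1.amazonaws.com

    Note also that the client only found an id_dsa identity; if the key copied to the instance was an RSA key, the matching private key needs to exist on the client (or be passed with -i).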

    Read the article

  • Deleting multiple objects in an AWS S3 bucket with s3curl.pl?

    - by user183394
    I have been trying to use the AWS "official" command line tool s3curl.pl to test out the recently announced multi-object delete. Here is what I have done: First, I tested out the s3curl.pl with a set of credentials without a hitch: $ s3curl.pl --id=s3 -- http://testbucket-0.s3.amazonaws.com/|xmllint --format - % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 884 0 884 0 0 4399 0 --:--:-- --:--:-- --:--:-- 5703 <?xml version="1.0" encoding="UTF-8"?> <ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"> <Name>testbucket-0</Name> <Prefix/> <Marker/> <MaxKeys>1000</MaxKeys> <IsTruncated>false</IsTruncated> <Contents> <Key>file_1</Key> <LastModified>2012-03-22T17:08:17.000Z</LastModified> <ETag>"ee0e521a76524034aaa5b331842a8b4e"</ETag> <Size>400000</Size> <Owner> <ID>e6d81ea69572270e58d3814ab674df8c8f1fd5d502669633a4951bdd5185f7f4</ID> <DisplayName>zackp</DisplayName> </Owner> <StorageClass>STANDARD</StorageClass> </Contents> <Contents> <Key>file_2</Key> <LastModified>2012-03-22T17:08:19.000Z</LastModified> <ETag>"6b32cbf8219a59690a9f69ba6ff3f590"</ETag> <Size>600000</Size> <Owner> <ID>e6d81ea69572270e58d3814ab674df8c8f1fd5d502669633a4951bdd5185f7f4</ID> <DisplayName>zackp</DisplayName> </Owner> <StorageClass>STANDARD</StorageClass> </Contents> </ListBucketResult> Then, I following the s3curl.pl's usage instructions: s3curl.pl --help Usage /usr/local/bin/s3curl.pl --id friendly-name (or AWSAccessKeyId) [options] -- [curl-options] [URL] options: --key SecretAccessKey id/key are AWSAcessKeyId and Secret (unsafe) --contentType text/plain set content-type header --acl public-read use a 'canned' ACL (x-amz-acl header) --contentMd5 content_md5 add x-amz-content-md5 header --put <filename> PUT request (from the provided local file) --post [<filename>] POST request (optional local file) --copySrc bucket/key Copy from this source key --createBucket [<region>] create-bucket with optional location constraint --head HEAD request --debug enable debug logging common curl options: -H 'x-amz-acl: public-read' another way of using canned ACLs -v verbose logging Then, I tried the following, and always got back error. I would appreciated it very much if someone could point out where I made a mistake? $ s3curl.pl --id=s3 --post multi_delete.xml -- http://testbucket-0.s3.amazonaws.com/?delete <?xml version="1.0" encoding="UTF-8"?> <Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message><StringToSignBytes>50 4f 53 54 0a 0a 0a 54 68 75 2c 20 30 35 20 41 70 72 20 32 30 31 32 20 30 30 3a 35 30 3a 30 38 20 2b 30 30 30 30 0a 2f 7a 65 74 74 61 72 2d 74 2f 3f 64 65 6c 65 74 65</StringToSignBytes><RequestId>707FBE0EB4A571A8</RequestId><HostId>mP3ZwlPTcRqARQZd6gU4UvBrxGBNIVa0VVe5p0rqGmq5hM65RprwcG/qcXe+pmDT</HostId><SignatureProvided>edkNGuugiSFe0ku4eGzkh8kYgHw=</SignatureProvided><StringToSign>POST Thu, 05 Apr 2012 00:50:08 +0000 The file multi_delete.xml contains the following: cat multi_delete.xml <?xml version="1.0" encoding="UTF-8"?> <Delete> <Quiet>true</Quiet> <Object> <Key>file_1</Key> <VersionId> </VersionId>> </Object> <Object> <Key>file_2</Key> <VersionId> </VersionId> </Object> </Delete> Thanks for any help! --Zack
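    Two hedged things to check, neither confirmed against the exact s3curl.pl revision in use: the Multi-Object Delete API requires a Content-MD5 header over the XML body, and the ?delete subresource has to be included in the string that gets signed, which older copies of the script do not do for every subresource. A sketch, where /path/to/s3curl.pl is a placeholder:

        $ grep -n delete /path/to/s3curl.pl                         # is 'delete' in the script's hard-coded list of signed subresources?
        $ md5=$(openssl md5 -binary multi_delete.xml | base64)
        $ s3curl.pl --id=s3 --contentMd5 "$md5" --post multi_delete.xml -- "http://testbucket-0.s3.amazonaws.com/?delete"

    The usage text above says --contentMd5 sets an x-amz-content-md5 header, so whether it also satisfies the API's Content-MD5 requirement is worth verifying against the installed copy.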

    Read the article

  • Why might login failures cause SQL 2005 to dump and ditch?

    - by Byron Sommardahl
    Our SQL 2005 server began timing out and finally stopped responding on Oct 26th. The application logs showed a ton of 17883 events leading up to a reboot. After the reboot everything was fine but we were still scratching our heads. Fast forward 6 days... it happened again. Then again 2 days later. The last night. Today it has happened three times to far. The timeline is fairly predictable when it happens: Trans log backups. Login failure for "user2". Minidump Another minidump for the scheduler Repeated 17883 events. Server fails little by little until it won't accept any requests. Reboot is all that gets us going again (a band-aid) Interesting, though, is that the server box itself doesn't seem to have any problems. CPU usage is normal. Network connectivity is fine. We can remote in and look at logs. Management studio does eventually bog down, though. Today, for the first time, we tried stopping services instead of a reboot. All services stopped on their own except for the SQL Server service. We finally did an "end task" on that one and were able to bring everything back up. It worked fine for about 30 minutes until we started seeing timeouts and 17883's again. This time, probably because we didn't reboot all the way, we saw a bunch of 844 events mixed in with the 17883's. Our entire tech team here is scratching heads... some ideas we're kicking around: MS Cumulative Update hit around the same time as when we first had a problem. Since then, we've rolled it back. Maybe it didn't rollback all the way. The situation looks and feels like an unhandled "stack overflow" (no relation) in that it starts small and compounds over time. Problem with this is that there isn't significant CPU usage. At any rate, we're not ruling SQL 2005 bug out at all. Maybe we added one too many import processes and have reached our limit on this box. (hard to believe). Looking at SQLDUMP0151.log at the time of one of the crashes. There are some "login failures" and then there are two stack dumps. 1st a normal stack dump, 2nd for a scheduler dump. Here's a snippet: (sorry for the lack of line breaks) 2009-11-10 11:59:14.95 spid63 Using 'xpsqlbot.dll' version '2005.90.3042' to execute extended stored procedure 'xp_qv'. This is an informational message only; no user action is required. 2009-11-10 11:59:15.09 spid63 Using 'xplog70.dll' version '2005.90.3042' to execute extended stored procedure 'xp_msver'. This is an informational message only; no user action is required. 2009-11-10 12:02:33.24 Logon Error: 18456, Severity: 14, State: 16. 2009-11-10 12:02:33.24 Logon Login failed for user 'standard_user2'. [CLIENT: 50.36.172.101] 2009-11-10 12:08:21.12 Logon Error: 18456, Severity: 14, State: 16. 2009-11-10 12:08:21.12 Logon Login failed for user 'standard_user2'. [CLIENT: 50.36.172.101] 2009-11-10 12:13:49.38 Logon Error: 18456, Severity: 14, State: 16. 2009-11-10 12:13:49.38 Logon Login failed for user 'standard_user2'. [CLIENT: 50.36.172.101] 2009-11-10 12:15:16.88 Logon Error: 18456, Severity: 14, State: 16. 2009-11-10 12:15:16.88 Logon Login failed for user 'standard_user2'. [CLIENT: 50.36.172.101] 2009-11-10 12:18:24.41 Logon Error: 18456, Severity: 14, State: 16. 2009-11-10 12:18:24.41 Logon Login failed for user 'standard_user2'. 
[CLIENT: 50.36.172.101] 2009-11-10 12:18:38.88 spid111 Using 'dbghelp.dll' version '4.0.5' 2009-11-10 12:18:39.02 spid111 *Stack Dump being sent to C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG\SQLDump0149.txt 2009-11-10 12:18:39.02 spid111 SqlDumpExceptionHandler: Process 111 generated fatal exception c0000005 EXCEPTION_ACCESS_VIOLATION. SQL Server is terminating this process. 2009-11-10 12:18:39.02 spid111 * ***************************************************************************** 2009-11-10 12:18:39.02 spid111 * 2009-11-10 12:18:39.02 spid111 * BEGIN STACK DUMP: 2009-11-10 12:18:39.02 spid111 * 11/10/09 12:18:39 spid 111 2009-11-10 12:18:39.02 spid111 * 2009-11-10 12:18:39.02 spid111 * 2009-11-10 12:18:39.02 spid111 * Exception Address = 0159D56F Module(sqlservr+0059D56F) 2009-11-10 12:18:39.02 spid111 * Exception Code = c0000005 EXCEPTION_ACCESS_VIOLATION 2009-11-10 12:18:39.02 spid111 * Access Violation occurred writing address 00000000 2009-11-10 12:18:39.02 spid111 * Input Buffer 138 bytes - 2009-11-10 12:18:39.02 spid111 * " N R S C _ P T A 22 00 4e 00 52 00 53 00 43 00 5f 00 50 00 54 00 41 00 2009-11-10 12:18:39.02 spid111 * C _ Q A . d b o . 43 00 5f 00 51 00 41 00 2e 00 64 00 62 00 6f 00 2e 00 2009-11-10 12:18:39.02 spid111 * U s p S e l N e x 55 00 73 00 70 00 53 00 65 00 6c 00 4e 00 65 00 78 00 2009-11-10 12:18:39.02 spid111 * t A c c o u n t 74 00 41 00 63 00 63 00 6f 00 75 00 6e 00 74 00 00 00 2009-11-10 12:18:39.02 spid111 * @ i n t F o r m I 0a 40 00 69 00 6e 00 74 00 46 00 6f 00 72 00 6d 00 49 2009-11-10 12:18:39.02 spid111 * D & 8 @ t x 00 44 00 00 26 04 04 38 00 00 00 09 40 00 74 00 78 00 2009-11-10 12:18:39.02 spid111 * t A l i a s § 74 00 41 00 6c 00 69 00 61 00 73 00 00 a7 0f 00 09 04 2009-11-10 12:18:39.02 spid111 * Ð GQE9732 d0 00 00 07 00 47 51 45 39 37 33 32 2009-11-10 12:18:39.02 spid111 * 2009-11-10 12:18:39.02 spid111 * 2009-11-10 12:18:39.02 spid111 * MODULE BASE END SIZE 2009-11-10 12:18:39.02 spid111 * sqlservr 01000000 02C09FFF 01c0a000 2009-11-10 12:18:39.02 spid111 * ntdll 7C800000 7C8C1FFF 000c2000 2009-11-10 12:18:39.02 spid111 * kernel32 77E40000 77F41FFF 00102000
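    The repeated error 18456 with state 16 is, on SQL 2005, generally a login whose default database cannot be opened at connect time; it may or may not be related to the access-violation dumps, but it is cheap to rule out. A hedged check from any box with the client tools, where SERVERNAME is a placeholder:

        $ sqlcmd -S SERVERNAME -E -Q "SELECT name, dbname, hasaccess, denylogin FROM sys.syslogins WHERE name = 'standard_user2'"

    If dbname points at a database that is offline, being restored, or dropped, fixing the login's default database removes at least that part of the noise from the timeline.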

    Read the article

  • Linux Mint 14 severe overheating

    - by agvares
    I've tried switching to Mint (it was Mint 12) about 6 months ago, but failed to do it due to enormous overheating which made it virtually impossible to run any applications more complex than Gedit. Time had passed, and now I'm back again with Mint 14, but feeling way more determined. What I face are the following issues: 1) Great overheating (and by great I mean that just by doing nothing my CPU temp is floating around 70-75 C, which I find a lot) 2) Running multiple applications (let's say Chrome, Skype and Pidgin) results in critical overheating and immediate shutdown of the system 3) Due to the stuff listed above, my battery drains in about 10-15 minutes, pretty much turning my laptop into a crippled desktop machine I've got a HP-dv6 laptop (i7, 6gb RAM, dual graphics) Here's the output of lspci: 00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09) 00:01.0 PCI bridge: Intel Corporation Xeon E3-1200/2nd Generation Core Processor Family PCI Express Root Port (rev 09) 00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09) 00:16.0 Communication controller: Intel Corporation 6 Series/C200 Series Chipset Family MEI Controller #1 (rev 04) 00:1a.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 (rev 05) 00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller (rev 05) 00:1c.0 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 1 (rev b5) 00:1c.1 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 2 (rev b5) 00:1c.2 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 3 (rev b5) 00:1c.3 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 4 (rev b5) 00:1d.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 (rev 05) 00:1f.0 ISA bridge: Intel Corporation HM65 Express Chipset Family LPC Controller (rev 05) 00:1f.2 SATA controller: Intel Corporation 6 Series/C200 Series Chipset Family 6 port SATA AHCI Controller (rev 05) 00:1f.3 SMBus: Intel Corporation 6 Series/C200 Series Chipset Family SMBus Controller (rev 05) 01:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI Whistler XT [AMD Radeon HD 6700M Series] 07:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 06) 0d:00.0 Network controller: Broadcom Corporation BCM4313 802.11b/g/n Wireless LAN Controller (rev 01) 13:00.0 Unassigned class [ff00]: Realtek Semiconductor Co., Ltd. RTS5209 PCI Express Card Reader (rev 01) 19:00.0 USB controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 04) what i've tried to do already: 1) I've edited my grub file to add some "splash" arguments there 2) Installed jupiter and powertop 3) Tried to upgrade to newer kernels (up to 3.8), btw running anything newer than 3.5 results in both resolution and wi-fi detection fail 4) Read lots of forum threads devoted to the topic So, my question is simple: What else might I do to become finally able to use Mint as my default OS without the risk of being burned alive by the CPU heat?
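    On dual-graphics dv6 machines the discrete Radeon often sits fully powered under the open driver, which by itself can explain both the 70-75 C idle temperature and the short battery life. A hedged experiment, assuming a kernel built with vga_switcheroo (stock Mint/Ubuntu kernels are):

        $ sudo mount -t debugfs none /sys/kernel/debug                    # only if debugfs is not already mounted
        $ cat /sys/kernel/debug/vgaswitcheroo/switch                      # shows which GPU is active/powered
        $ echo OFF | sudo tee /sys/kernel/debug/vgaswitcheroo/switch      # power down the inactive (discrete) GPU

    If the temperature drops noticeably within a minute or two, the longer-term fix is the proprietary driver or a boot-time script that issues the same OFF.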

    Read the article

  • Confirm disk is broken when it passes all diagnostics

    - by Halfgaar
    I have a system with a potentially broken disk, but the disk passes all manner of diagnostics. I have been unable to confirm that the disk is broken. What are my options? I could just replace the disk, but because this situation is very similar to another more severe situation I have (long story), I'd like to actually make a proper diagnosis as opposed to randomly binning hardware. The issue and history is this: I had a Debian Linux PC (500 MHz P3) acting as router, nagios and munin. It crashed every couple of weeks. No logs or dmesg could be obtained (because it's an old Compaq that only boots when you configure it as keyboardless, making connecting a keyboard later, once it's booted, impossible). At the time, I just replaced the computer with another Compaq (P4 2.4 GHz) because I thought the hardware was faulty. However, it still crashed every couple of weeks. the difference is that on this computer, I can still SSH into it. It gives all kinds of errors on hda. I'd like to confirm that the disk is broken, but nothing I do confirms this: SMART error logs shows no errors. Normally when a disk starts acting up, SMART my pass, but it still records a read-error in the error log. SMART self-test (smartctl -t long /dev/sda) completes without errors. re-allocated sector count (a tell-tale parameter) has been 31 all its life, even when the disk was still in use in my desktop PC years ago, and it still is. The figure never changed. dd if=/dev/sda of=/dev/null bs=4096 passes with flying colors. What else can I do to assess the health of the drive? Again, this is not about making this router fully functional again, this is a disk forensic question, because it just so happens that I have another server that potentially has the same problem, and knowing the answer to this will possibly help me greatly. For the record, below are logs and such. This is the smartctl -a output: smartctl 5.40 2010-07-12 r3124 [i686-pc-linux-gnu] (local build) Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net === START OF INFORMATION SECTION === Model Family: Seagate Barracuda 7200.7 and 7200.7 Plus family Device Model: ST3120026A Serial Number: 5JT1CLQM Firmware Version: 3.06 User Capacity: 120,034,123,776 bytes Device is: In smartctl database [for details use: -P show] ATA Version is: 6 ATA Standard is: ATA/ATAPI-6 T13 1410D revision 2 Local Time is: Mon Jul 1 21:18:33 2013 CEST SMART support is: Available - device has SMART capability. SMART support is: Enabled === START OF READ SMART DATA SECTION === SMART overall-health self-assessment test result: PASSED General SMART Values: Offline data collection status: (0x82) Offline data collection activity was completed without error. Auto Offline Data Collection: Enabled. Self-test execution status: ( 24) The self-test routine was aborted by the host. Total time to complete Offline data collection: ( 430) seconds. Offline data collection capabilities: (0x5b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. No Conveyance Self-test supported. Selective Self-test supported. SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer. Error logging capability: (0x01) Error logging supported. No General Purpose Logging support. Short self-test routine recommended polling time: ( 1) minutes. Extended self-test routine recommended polling time: ( 85) minutes. 
SMART Attributes Data Structure revision number: 10 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x000f 050 046 006 Pre-fail Always - 47766662 3 Spin_Up_Time 0x0003 097 096 000 Pre-fail Always - 0 4 Start_Stop_Count 0x0032 100 100 020 Old_age Always - 10 5 Reallocated_Sector_Ct 0x0033 100 100 036 Pre-fail Always - 31 7 Seek_Error_Rate 0x000f 084 060 030 Pre-fail Always - 820305 9 Power_On_Hours 0x0032 048 048 000 Old_age Always - 46373 10 Spin_Retry_Count 0x0013 100 100 097 Pre-fail Always - 0 12 Power_Cycle_Count 0x0032 100 100 020 Old_age Always - 605 194 Temperature_Celsius 0x0022 036 065 000 Old_age Always - 36 195 Hardware_ECC_Recovered 0x001a 050 046 000 Old_age Always - 47766662 197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 0 198 Offline_Uncorrectable 0x0010 100 100 000 Old_age Offline - 0 199 UDMA_CRC_Error_Count 0x003e 200 196 000 Old_age Always - 6 200 Multi_Zone_Error_Rate 0x0000 100 253 000 Old_age Offline - 0 202 Data_Address_Mark_Errs 0x0032 100 253 000 Old_age Always - 0 SMART Error Log Version: 1 No Errors Logged SMART Self-test log structure revision number 1 Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error # 1 Extended offline Aborted by host 80% 46361 - # 2 Extended offline Completed without error 00% 46358 - # 3 Short offline Completed without error 00% 12046 - # 4 Extended offline Completed without error 00% 10472 - # 5 Short offline Completed without error 00% 10471 - # 6 Short offline Completed without error 00% 10471 - # 7 Short offline Completed without error 00% 6770 - # 8 Extended offline Aborted by host 90% 5958 - # 9 Extended offline Aborted by host 90% 5951 - #10 Short offline Completed without error 00% 5024 - #11 Extended offline Aborted by host 80% 5024 - #12 Short offline Completed without error 00% 3697 - #13 Short offline Completed without error 00% 237 - #14 Short offline Completed without error 00% 145 - #15 Short offline Completed without error 00% 69 - #16 Extended offline Completed without error 00% 68 - #17 Short offline Completed without error 00% 66 - #18 Short offline Completed without error 00% 49 - #19 Short offline Completed without error 00% 29 - #20 Short offline Completed without error 00% 29 - SMART Selective self-test log data structure revision number 1 SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS 1 0 0 Not_testing 2 0 0 Not_testing 3 0 0 Not_testing 4 0 0 Not_testing 5 0 0 Not_testing Selective self-test flags (0x0): After scanning selected spans, do NOT read-scan remainder of disk. If Selective self-test is pending on power-up, resume after 0 minute delay. And this is the dmesg error when it has crashed (which repeats for a bunch of different sectors): [1755091.211136] sd 0:0:0:0: [sda] Unhandled error code [1755091.211144] sd 0:0:0:0: [sda] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [1755091.211151] sd 0:0:0:0: [sda] CDB: Read(10): 28 00 08 fe ad 38 00 00 08 00 [1755091.211166] end_request: I/O error, dev sda, sector 150908216
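    DID_BAD_TARGET comes from the SCSI/ATA layer when the device stops answering at all, which tends to implicate the cable, controller, or power rather than the platters, and would be consistent with SMART and dd seeing a perfectly healthy drive between incidents. Two hedged checks that exercise paths SMART does not:

        $ dmesg | grep -iE 'ata|exception|link reset' | tail -n 50     # look for bus resets preceding the hda/sda errors
        $ sudo badblocks -sv /dev/sda                                  # read-only surface scan, independent of the drive's own error reporting

    If neither ever shows a problem, swapping the IDE cable or moving the disk to another machine is a cheaper experiment than binning the drive.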

    Read the article

  • SSH is not working: password prompt is not appearing

    - by Sumanth Lingappa
    I am not able to SSH into my ubuntu server since yesterday. I am not using any keyless or public key method.. Its simple SSH with username and password everytime.. However I can do a VNC session running on my ubuntu server.. But I am afraid that if the vnc session goes out, I wont be having any way to login to the server.. My ssh-vvv output is as below.. sumanth@sumanth:~$ ssh -vvv user@serverIP OpenSSH_6.6.1, OpenSSL 1.0.1f 6 Jan 2014 debug1: Reading configuration data /etc/ssh/ssh_config debug1: /etc/ssh/ssh_config line 19: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to 172.16.2.156 [172.16.2.156] port 22. debug1: Connection established. debug1: identity file /home/sumanth/.ssh/id_rsa type -1 debug1: identity file /home/sumanth/.ssh/id_rsa-cert type -1 debug1: identity file /home/sumanth/.ssh/id_dsa type -1 debug1: identity file /home/sumanth/.ssh/id_dsa-cert type -1 debug1: identity file /home/sumanth/.ssh/id_ecdsa type -1 debug1: identity file /home/sumanth/.ssh/id_ecdsa-cert type -1 debug1: identity file /home/sumanth/.ssh/id_ed25519 type -1 debug1: identity file /home/sumanth/.ssh/id_ed25519-cert type -1 debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9p1 Debian-5ubuntu1 debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1 pat OpenSSH_5* compat 0x0c000000 debug2: fd 3 setting O_NONBLOCK debug3: load_hostkeys: loading entries for host "172.16.2.156" from file "/home/sumanth/.ssh/known_hosts" debug3: load_hostkeys: found key type ECDSA in file /home/sumanth/.ssh/known_hosts:5 debug3: load_hostkeys: loaded 1 keys debug3: order_hostkeyalgs: prefer hostkeyalgs: [email protected],[email protected],[email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug2: kex_parse_kexinit: [email protected],ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: [email protected],[email protected],[email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,[email protected],[email protected],[email protected],[email protected],[email protected],ssh-ed25519,ssh-rsa,ssh-dss debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,[email protected],[email protected],[email protected],aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,[email protected],[email protected],[email protected],aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-md5,hmac-sha1,[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-md5,hmac-sha1,[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: 
kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: kex_parse_kexinit: ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss,ecdsa-sha2-nistp256 debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-256-96,hmac-sha2-512,hmac-sha2-512-96,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-256-96,hmac-sha2-512,hmac-sha2-512-96,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: mac_setup: setup hmac-md5 debug1: kex: server->client aes128-ctr hmac-md5 none debug2: mac_setup: setup hmac-md5 debug1: kex: client->server aes128-ctr hmac-md5 none debug1: sending SSH2_MSG_KEX_ECDH_INIT debug1: expecting SSH2_MSG_KEX_ECDH_REPLY debug1: Server host key: ECDSA ea:4e:15:52:15:dd:6b:09:d4:36:cb:14:2d:c3:1b:7a debug3: load_hostkeys: loading entries for host "172.16.2.156" from file "/home/sumanth/.ssh/known_hosts" debug3: load_hostkeys: found key type ECDSA in file /home/sumanth/.ssh/known_hosts:5 debug3: load_hostkeys: loaded 1 keys debug1: Host '172.16.2.156' is known and matches the ECDSA host key. debug1: Found key in /home/sumanth/.ssh/known_hosts:5 debug1: ssh_ecdsa_verify: signature correct debug2: kex_derive_keys debug2: set_newkeys: mode 1 debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug2: set_newkeys: mode 0 debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: /home/sumanth/.ssh/id_rsa ((nil)), debug2: key: /home/sumanth/.ssh/id_dsa ((nil)), debug2: key: /home/sumanth/.ssh/id_ecdsa ((nil)), debug2: key: /home/sumanth/.ssh/id_ed25519 ((nil)),
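    The pasted trace ends right after the client lists its local identities (whether that is where the session actually hangs or just where the paste ends), i.e. before the server answers the first authentication request, which points at something stalling on the server side. Since a VNC session still works, a hedged set of checks to run there while an ssh attempt is hanging:

        $ sudo service ssh status
        $ sudo tail -n 50 /var/log/auth.log        # the server-side view of the stalled login
        $ df -h / /var                             # a full root or /var filesystem is a common cause of sshd/PAM hanging at this stage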

    Read the article

  • Puppet master REST API returns 403 when running under Passenger but works when the master runs from the command line

    - by Anadi Misra
    I am using the standard auth.conf provided in puppet install for the puppet master which is running through passenger under Nginx. However for most of the catalog, files and certitifcate request I get a 403 response. ### Authenticated paths - these apply only when the client ### has a valid certificate and is thus authenticated # allow nodes to retrieve their own catalog path ~ ^/catalog/([^/]+)$ method find allow $1 # allow nodes to retrieve their own node definition path ~ ^/node/([^/]+)$ method find allow $1 # allow all nodes to access the certificates services path ~ ^/certificate_revocation_list/ca method find allow * # allow all nodes to store their reports path /report method save allow * # unconditionally allow access to all file services # which means in practice that fileserver.conf will # still be used path /file allow * ### Unauthenticated ACL, for clients for which the current master doesn't ### have a valid certificate; we allow authenticated users, too, because ### there isn't a great harm in letting that request through. # allow access to the master CA path /certificate/ca auth any method find allow * path /certificate/ auth any method find allow * path /certificate_request auth any method find, save allow * path /facts auth any method find, search allow * # this one is not stricly necessary, but it has the merit # of showing the default policy, which is deny everything else path / auth any Puppet master however does not seems to be following this as I get this error on client [amisr1@blramisr195602 ~]$ sudo puppet agent --no-daemonize --verbose --server bangvmpllda02.XXXXX.com [sudo] password for amisr1: Starting Puppet client version 3.0.1 Warning: Unable to fetch my node definition, but the agent run will continue: Warning: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /certificate_revocation_list/ca [find] at :110 Info: Retrieving plugin Error: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [search] at :110 Error: /File[/var/lib/puppet/lib]: Could not evaluate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110 Could not retrieve file metadata for puppet://devops.XXXXX.com/plugins: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110 Error: Could not retrieve catalog from remote server: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /catalog/blramisr195602.XXXXX.com [find] at :110 Using cached catalog Error: Could not retrieve catalog; skipping run Error: Could not send report: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /report/blramisr195602.XXXXX.com [save] at :110 and the server logs show XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/certificate_revocation_list/ca? HTTP/1.1" 403 102 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadatas/plugins?links=manage&recurse=true&&ignore=---+%0A++-+%22.svn%22%0A++-+CVS%0A++-+%22.git%22&checksum_type=md5 HTTP/1.1" 403 95 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadata/plugins? 
HTTP/1.1" 403 93 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "POST /production/catalog/blramisr195602.XXXXX.com HTTP/1.1" 403 106 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "PUT /production/report/blramisr195602.XXXXX.com HTTP/1.1" 403 105 "-" "Ruby" thefile server conf file is as follows (and goin by what they say on puppet site, It is better to regulate access in auth.conf for reaching file server and then allow file server to server all) [files] path /apps/puppet/files allow * [private] path /apps/puppet/private/%H allow * [modules] allow * I am using server and client version 3 Nginx has been compiled using the following options nginx version: nginx/1.3.9 built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC) TLS SNI support enabled configure arguments: --prefix=/apps/nginx --conf-path=/apps/nginx/nginx.conf --pid-path=/apps/nginx/run/nginx.pid --error-log-path=/apps/nginx/logs/error.log --http-log-path=/apps/nginx/logs/access.log --with-http_ssl_module --with-http_gzip_static_module --add-module=/usr/lib/ruby/gems/1.8/gems/passenger-3.0.18/ext/nginx --add-module=/apps/Downloads/nginx/nginx-auth-ldap-master/ and the standard nginx puppet master conf server { ssl on; listen 8140 ssl; server_name _; passenger_enabled on; passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn; passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify; passenger_min_instances 5; access_log logs/puppet_access.log; error_log logs/puppet_error.log; root /apps/nginx/html/rack/public; ssl_certificate /var/lib/puppet/ssl/certs/bangvmpllda02.XXXXXX.com.pem; ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangvmpllda02.XXXXXX.com.pem; ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem; ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem; ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA; ssl_prefer_server_ciphers on; ssl_verify_client optional; ssl_verify_depth 1; ssl_session_cache shared:SSL:128m; ssl_session_timeout 5m; } Puppet is picking up the correct settings from the files mentioned because config print command points to /etc/puppet [amisr1@bangvmpllDA02 puppet]$ sudo puppet config print | grep conf async_storeconfigs = false authconfig = /etc/puppet/namespaceauth.conf autosign = /etc/puppet/autosign.conf catalog_cache_terminus = store_configs confdir = /etc/puppet config = /etc/puppet/puppet.conf config_file_name = puppet.conf config_version = "" configprint = all configtimeout = 120 dblocation = /var/lib/puppet/state/clientconfigs.sqlite3 deviceconfig = /etc/puppet/device.conf fileserverconfig = /etc/puppet/fileserver.conf genconfig = false hiera_config = /etc/puppet/hiera.yaml localconfig = /var/lib/puppet/state/localconfig name = config rest_authconfig = /etc/puppet/auth.conf storeconfigs = true storeconfigs_backend = puppetdb tagmap = /etc/puppet/tagmail.conf thin_storeconfigs = false I checked the firewall rules on this VM; 80, 443, 8140, 3000 are allowed. Do I still have to tweak any specifics to auth.conf for getting this to work?

    Read the article

  • Are file access times not properly maintained in Mac OS X?

    - by Ether
    I'm trying to determine how file access times are maintained by default in Mac OS X, as I'm trying to diagnose some odd behaviour I'm seeing in a new MBP Unibody (running Snow Leopard, 10.6.2): The symptoms (drilling down to the specific behaviour that seems to be causing the issue): mutt is unable to switch to mailboxes which have recently received new mail mail is delivered by procmail, which updates the mtime of the mbox folder it is updating, but does not alter the atime (this is how new mail detection works: by comparing atime to mtime) however, both the mtime and atime of the mbox file is getting updated Through testing, it does not appear that atimes can be set separately in the filesystem: : [ether@tequila ~]$; touch test : [ether@tequila ~]$; touch -m -t 200801010000 test2 : [ether@tequila ~]$; touch -a -t 200801010000 test3 : [ether@tequila ~]$; ls -l test* -rw------- 1 ether staff 0 Dec 30 11:42 test -rw------- 1 ether staff 0 Jan 1 2008 test2 -rw------- 1 ether staff 0 Dec 30 11:43 test3 : [ether@tequila ~]$; ls -lu test* -rw------- 1 ether staff 0 Dec 30 11:42 test -rw------- 1 ether staff 0 Dec 30 11:43 test2 -rw------- 1 ether staff 0 Dec 30 11:43 test3 The test2 file is created with an old mtime, and the atime is set to now (as it is a new file), which is correct. However, test3 is created with an old atime, but is not set properly on the file. To be sure this is not just behaviour seen with new files, let's modify an old file: : [ether@tequila ~]$; touch -a -t 200801010000 test : [ether@tequila ~]$; ls -l test -rw------- 1 ether staff 0 Dec 30 11:42 test : [ether@tequila ~]$; ls -lu test -rw------- 1 ether staff 0 Dec 30 11:45 test So it would seem that atimes cannot be set explicitly (it is always reset to "now" when either mtime or atime modifications are submitted). Is this something inherent to the filesystem itself, is it something that can be changed, or am I totally crazy and looking in the wrong place? PS. the output of mount is: : [ether@tequila ~]$; mount /dev/disk0s2 on / (hfs, local, journaled) devfs on /dev (devfs, local, nobrowse) map -hosts on /net (autofs, nosuid, automounted, nobrowse) map auto_home on /home (autofs, automounted, nobrowse) ...and Disk Utility says that the drive is of type "Mac OS Extended (Journaled)".
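
    A minimal probe one could run in the same shell to separate "reads don't update atime on this volume" from "atime can't be set explicitly" (the file name is arbitrary; %Sa and %Sm are BSD stat format codes for access and modification time):

        touch atime_probe
        stat -f 'before:        atime=%Sa  mtime=%Sm' atime_probe
        sleep 2
        cat atime_probe > /dev/null                  # a plain read
        stat -f 'after read:    atime=%Sa  mtime=%Sm' atime_probe

        # try forcing an old atime via the utimes() path that touch uses
        touch -a -t 200801010000 atime_probe
        stat -f 'after touch -a: atime=%Sa' atime_probe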

    Read the article

  • ubuntu ssh does not connect

    - by bocca
    SSH won't be able to establish a connection to our server Here's the output of ssh -vvv: ssh -v -v -v 11.11.11.11 OpenSSH_5.1p1 Debian-6ubuntu2, OpenSSL 0.9.8g 19 Oct 2007 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to 11.11.11.11 [11.11.11.11] port 22. debug1: Connection established. debug1: permanently_set_uid: 0/0 debug1: identity file /root/.ssh/identity type -1 debug1: identity file /root/.ssh/id_rsa type -1 debug1: identity file /root/.ssh/id_dsa type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1p1 Debian-5ubuntu1 debug1: match: OpenSSH_5.1p1 Debian-5ubuntu1 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.1p1 Debian-6ubuntu2 debug2: fd 3 setting O_NONBLOCK debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,[email protected],aes128-ctr,aes192-ctr,aes256-ctr debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,[email protected],aes128-ctr,aes192-ctr,aes256-ctr debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,[email protected],aes128-ctr,aes192-ctr,aes256-ctr debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,[email protected],aes128-ctr,aes192-ctr,aes256-ctr debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: mac_setup: found hmac-md5 debug1: kex: server->client aes128-cbc hmac-md5 none debug2: mac_setup: found hmac-md5 debug1: kex: client->server aes128-cbc hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug2: dh_gen_key: priv key bits set: 133/256 debug2: bits set: 486/1024 debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug3: check_host_in_hostfile: filename /root/.ssh/known_hosts debug3: check_host_in_hostfile: match line 1 debug1: Host 
'11.11.11.11' is known and matches the RSA host key. debug1: Found key in /root/.ssh/known_hosts:1 debug2: bits set: 497/1024 debug1: ssh_rsa_verify: signature correct debug2: kex_derive_keys debug2: set_newkeys: mode 1 debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug2: set_newkeys: mode 0 debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: /root/.ssh/identity ((nil)) debug2: key: /root/.ssh/id_rsa ((nil)) debug2: key: /root/.ssh/id_dsa ((nil)) debug1: Authentications that can continue: publickey,password debug3: start over, passed a different list publickey,password debug3: preferred gssapi-keyex,gssapi-with-mic,gssapi,publickey,keyboard-interactive,password debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive,password debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Trying private key: /root/.ssh/identity debug3: no such identity: /root/.ssh/identity debug1: Trying private key: /root/.ssh/id_rsa debug3: no such identity: /root/.ssh/id_rsa debug1: Trying private key: /root/.ssh/id_dsa debug3: no such identity: /root/.ssh/id_dsa debug2: we did not send a packet, disable method debug3: authmethod_lookup password debug3: remaining preferred: ,password debug3: authmethod_is_enabled password debug1: Next authentication method: password [email protected]'s password: debug3: packet_send2: adding 64 (len 57 padlen 7 extra_pad 64) debug2: we sent a password packet, wait for reply debug1: Authentication succeeded (password). debug1: channel 0: new [client-session] debug3: ssh_session2_open: channel_new: 0 debug2: channel 0: send open debug1: Requesting [email protected] debug1: Entering interactive session. debug2: callback start debug2: client_session2_setup: id 0 debug2: channel 0: request pty-req confirm 1 debug3: tty_make_modes: ospeed 38400 debug3: tty_make_modes: ispeed 38400 debug1: Sending environment. 
debug3: Ignored env ORBIT_SOCKETDIR debug3: Ignored env SSH_AGENT_PID debug3: Ignored env SHELL debug3: Ignored env TERM debug3: Ignored env XDG_SESSION_COOKIE debug3: Ignored env GTK_RC_FILES debug3: Ignored env WINDOWID debug3: Ignored env USER debug3: Ignored env LS_COLORS debug3: Ignored env GNOME_KEYRING_SOCKET debug3: Ignored env SSH_AUTH_SOCK debug3: Ignored env USERNAME debug3: Ignored env SESSION_MANAGER debug3: Ignored env MAIL debug3: Ignored env PATH debug3: Ignored env DESKTOP_SESSION debug3: Ignored env PWD debug3: Ignored env GDM_KEYBOARD_LAYOUT debug3: Ignored env GNOME_KEYRING_PID debug1: Sending env LANG = en_CA.UTF-8 debug2: channel 0: request env confirm 0 debug3: Ignored env GDM_LANG debug3: Ignored env GDMSESSION debug3: Ignored env HISTCONTROL debug3: Ignored env SPEECHD_PORT debug3: Ignored env HOME debug3: Ignored env SHLVL debug3: Ignored env GNOME_DESKTOP_SESSION_ID debug3: Ignored env LOGNAME debug3: Ignored env XDG_DATA_DIRS debug3: Ignored env DBUS_SESSION_BUS_ADDRESS debug3: Ignored env LESSOPEN debug3: Ignored env DISPLAY debug3: Ignored env LESSCLOSE debug3: Ignored env XAUTHORITY debug3: Ignored env COLORTERM debug3: Ignored env _ debug2: channel 0: request shell confirm 1 debug2: fd 3 setting TCP_NODELAY debug2: callback done debug2: channel 0: open confirm rwindow 0 rmax 32768 debug2: channel_input_confirm: type 99 id 0 debug2: PTY allocation request accepted on channel 0 debug2: channel 0: rcvd adjust 2097152 debug2: channel_input_confirm: type 99 id 0 debug2: shell request accepted on channel 0
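
    The trace above actually gets as far as "shell request accepted", so authentication succeeds and the hang happens after login. A few generic checks, offered only as a sketch (<user> is a placeholder):

        ssh 11.11.11.11 /bin/true; echo "exit=$?"          # does a bare command return?
        ssh -t 11.11.11.11 /bin/bash --noprofile --norc    # bypass the login scripts

        # on the server:
        sudo tail -f /var/log/auth.log                     # watch the session open and close
        getent passwd <user>                               # is the login shell valid?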

    Read the article

  • Amazon EC2 pem file stopped working suddenly

    - by Jashwant
    I was connecting to Amazon EC2 through SSH and it was working well. But all of a sudden, it stopped working. I am not able to connect anymore with the same key file. What can go wrong ? Here's the debug info. ssh -vvv -i ~/Downloads/mykey.pem [email protected] OpenSSH_6.1p1 Debian-4, OpenSSL 1.0.1c 10 May 2012 debug1: Reading configuration data /etc/ssh/ssh_config debug1: /etc/ssh/ssh_config line 19: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to ec2-54-222-60-78.eu.compute.amazonaws.com [54.229.60.78] port 22. debug1: Connection established. debug3: Incorrect RSA1 identifier debug3: Could not load "/home/jashwant/Downloads/mykey.pem" as a RSA1 public key debug1: identity file /home/jashwant/Downloads/mykey.pem type -1 debug1: identity file /home/jashwant/Downloads/mykey.pem-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9p1 Debian-5ubuntu1.1 debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1.1 pat OpenSSH_5* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_6.1p1 Debian-4 debug2: fd 3 setting O_NONBLOCK debug3: load_hostkeys: loading entries for host "ec2-54-222-60-78.eu.compute.amazonaws.com" from file "/home/jashwant/.ssh/known_hosts" debug3: load_hostkeys: found key type ECDSA in file /home/jashwant/.ssh/known_hosts:4 debug3: load_hostkeys: loaded 1 keys debug3: order_hostkeyalgs: prefer hostkeyalgs: [email protected],[email protected],[email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug2: kex_parse_kexinit: ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: [email protected],[email protected],[email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,[email protected],[email protected],[email protected],[email protected],ssh-rsa,ssh-dss debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: kex_parse_kexinit: ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss,ecdsa-sha2-nistp256 debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: 
aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-256-96,hmac-sha2-512,hmac-sha2-512-96,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-256-96,hmac-sha2-512,hmac-sha2-512-96,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: mac_setup: found hmac-md5 debug1: kex: server->client aes128-ctr hmac-md5 none debug2: mac_setup: found hmac-md5 debug1: kex: client->server aes128-ctr hmac-md5 none debug1: sending SSH2_MSG_KEX_ECDH_INIT debug1: expecting SSH2_MSG_KEX_ECDH_REPLY debug1: Server host key: ECDSA d8:05:8e:fe:37:2d:1e:2c:f1:27:c2:e7:90:7f:45:48 debug3: load_hostkeys: loading entries for host "ec2-54-222-60-78.eu.compute.amazonaws.com" from file "/home/jashwant/.ssh/known_hosts" debug3: load_hostkeys: found key type ECDSA in file /home/jashwant/.ssh/known_hosts:4 debug3: load_hostkeys: loaded 1 keys debug3: load_hostkeys: loading entries for host "54.229.60.78" from file "/home/jashwant/.ssh/known_hosts" debug3: load_hostkeys: found key type ECDSA in file /home/jashwant/.ssh/known_hosts:5 debug3: load_hostkeys: loaded 1 keys debug1: Host 'ec2-54-222-60-78.eu.compute.amazonaws.com' is known and matches the ECDSA host key. debug1: Found key in /home/jashwant/.ssh/known_hosts:4 debug1: ssh_ecdsa_verify: signature correct debug2: kex_derive_keys debug2: set_newkeys: mode 1 debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug2: set_newkeys: mode 0 debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: jashwant@jashwant-linux (0x7f827cbe4f00) debug2: key: /home/jashwant/Downloads/mykey.pem ((nil)) debug1: Authentications that can continue: publickey debug3: start over, passed a different list publickey debug3: preferred gssapi-keyex,gssapi-with-mic,publickey,keyboard-interactive,password debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive,password debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Offering RSA public key: jashwant@jashwant-linux debug3: send_pubkey_test debug2: we sent a publickey packet, wait for reply debug1: Authentications that can continue: publickey debug1: Trying private key: /home/jashwant/Downloads/mykey.pem debug1: read PEM private key done: type RSA debug3: sign_and_send_pubkey: RSA 9b:7d:9f:2e:7a:ef:51:a2:4e:fb:0c:c0:e8:d4:66:12 debug2: we sent a publickey packet, wait for reply debug1: Authentications that can continue: publickey debug2: we did not send a packet, disable method debug1: No more authentication methods to try. Permission denied (publickey). I've already googled everything and checked : Public DNS is same (It hasnt changed), Username is ubuntu as it's a Ubuntu AMI ( Used the same earlier), Permission is 400 on mykey.pem file ssh port is enabled via security groups ( Used the same ealier )
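
    Two quick checks worth running, as a sketch; <public-dns> stands in for the instance's DNS name. The log shows the agent key jashwant@jashwant-linux being offered before the .pem, and the openssl pipeline below is the usual way to get a fingerprint comparable to the one the EC2 console shows for key pairs Amazon generated.

        # offer only the .pem, not whatever ssh-agent holds:
        ssh -o IdentitiesOnly=yes -i ~/Downloads/mykey.pem ubuntu@<public-dns>

        # fingerprint of the local private key, to compare with the key pair
        # listed in the EC2 console:
        openssl pkcs8 -in ~/Downloads/mykey.pem -inform PEM -outform DER -topk8 -nocrypt \
          | openssl sha1 -c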

    Read the article

  • Mount TMPFS instead of ro /dev

    - by schiggn
    I am working on a ARM-Based embedded system with a custom Debian Linux based on kernel 2.6.31. In the final system, the Root file system is stored as squashfs on flash. Now, the folder /dev is created by udev, but since there is no hot plugging functionality needed and booting time is critical, I wanted to delete udev and "hard code" the /dev folder (read here, page 5). because i still need to change parameters of the devices (with ioctl /sysfs) this does not work for me in this case. so i thought of mounting a tmpfs on /dev and change the parameters there. is this possible? and how to do best? my approach would be: delete /dev from RFS create tar containing basic devices mount tmpfs /dev untar tar-file into /dev change parameters Could this work? Do you see any problems? I found out, that you can mount on top of already mounted mount point, is it somehow possible just to take data with while mounting the new file system? if so that would be very convenient! Thanks Update: I just tried that out, but I'm stuck at a certain point. I packed all my devices into devices.tar, packed it into /usr of my squashfs and added the following lines to mountkernfs.sh, which is executed right after INIT. #mount /dev on tmpfs echo -n "Mounting /dev on tmpfs..." mount -o size=5M,mode=0755 -t tmpfs tmpfs /dev mknod -m 600 /dev/console c 5 1 mknod -m 600 /dev/null c 1 3 echo "done." echo -n "Populating /dev..." tar -xf /usr/devices.tar -C /dev echo "done." This works fine on the version over NFS, if I place printf's in the code, I can see it executing, if I comment out the extracting part, its complaining about missing devices. Booting OK mmc0: new high speed SDHC card at address 0007 mmcblk0: mmc0:0007 SD04G 3.67 GiB mmcblk0: p1 IP-Config: Unable to set interface netmask (-22). Looking up port of RPC 100003/2 on 192.168.1.234 Looking up port of RPC 100005/1 on 192.168.1.234 VFS: Mounted root (nfs filesystem) on device 0:14. Freeing init memory: 136K INIT: version 2.86 booting Mounting /dev on tmpfs...done. Populating /dev...done. Initializing /var...done. Setting the system clock. System Clock set to: Thu Sep 13 11:26:23 UTC 2012. INIT: Entering runlevel: 2 UBI: attaching mtd8 to ubi0 Commenting out the extraction of the tar mmc0: new high speed SDHC card at address 0007 mmcblk0: mmc0:0007 SD04G 3.67 GiB mmcblk0: p1 IP-Config: Unable to set interface netmask (-22). Looking up port of RPC 100003/2 on 192.168.1.234 Looking up port of RPC 100005/1 on 192.168.1.234 VFS: Mounted root (nfs filesystem) on device 0:14. Freeing init memory: 136K INIT: version 2.86 booting Mounting /dev on tmpfs...done. Populating /dev...done. Initializing /var...done. Setting the system clock. Cannot access the Hardware Clock via any known method. Use the --debug option to see the details of our search for an access method. Unable to set System Clock to: Thu Sep 13 12:24:00 UTC 2012 ... (warning). INIT: Entering runlevel: 2 libubi: error!: cannot open "/dev/ubi_ctrl" So far so good. But if I pack the whole story into a squashfs and boot from there, it is acting strange. It's telling me while booting that it is unable to open an initial console and its throwing errors on mounting the UBIFS devices, but finally provides a login anyway. Over that my echo's are not executed. If I then log in, /dev is mounted as TMPFS as desired and all the devices reside inside. When I redo the "mount" command to mount the UBIFS partitions it is executed whitout problem and useable. 
From squashfs VFS: Mounted root (squashfs filesystem) readonly on device 31:15. Freeing init memory: 136K Warning: unable to open an initial console. mmc0: new high speed SDHC card at address 0007 mmcblk0: mmc0:0007 SD04G 3.67 GiB mmcblk0: p1 UBIFS error (pid 484): ubifs_get_sb: cannot open "ubi1_0", error -19 Additionally, a part of the rest of the bootscripts is still exexuted, but not all of them. Does anyone has a clue why? Other question, is 5MB enough/too much for /dev?
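
    One hedged observation: "Warning: unable to open an initial console" usually means the read-only root image itself contains no /dev/console node at the moment init starts, i.e. before mountkernfs.sh ever runs, which would also explain why the echoes never appear. A sketch of baking the two minimal nodes into the squashfs source tree before building the image (rootfs/ is a placeholder for your staging directory):

        mkdir -p rootfs/dev
        sudo mknod -m 600 rootfs/dev/console c 5 1
        sudo mknod -m 666 rootfs/dev/null    c 1 3
        mksquashfs rootfs rootfs.squashfs -noappend

    As for sizing, a tmpfs only consumes memory for what is actually stored in it, so a 5 MB cap for a handful of device nodes is generous rather than wasteful.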

    Read the article

  • Cisco ASA 5505 - L2TP over IPsec

    - by xraminx
    I have followed this document on cisco site to set up the L2TP over IPsec connection. When I try to establish a VPN to ASA 5505 from my Windows XP, after I click on "connect" button, the "Connecting ...." dialog box appears and after a while I get this error message: Error 800: Unable to establish VPN connection. The VPN server may be unreachable, or security parameters may not be configured properly for this connection. ASA version 7.2(4) ASDM version 5.2(4) Windows XP SP3 Windows XP and ASA 5505 are on the same LAN for test purposes. Edit 1: There are two VLANs defined on the cisco device (the standard setup on cisco ASA5505). - port 0 is on VLAN2, outside; - and ports 1 to 7 on VLAN1, inside. I run a cable from my linksys home router (10.50.10.1) to the cisco ASA5505 router on port 0 (outside). Port 0 have IP 192.168.1.1 used internally by cisco and I have also assigned the external IP 10.50.10.206 to port 0 (outside). I run a cable from Windows XP to Cisco router on port 1 (inside). Port 1 is assigned an IP from Cisco router 192.168.1.2. The Windows XP is also connected to my linksys home router via wireless (10.50.10.141). Edit 2: When I try to establish vpn, the Cisco device real time Log viewer shows 7 entries like this: Severity:5 Date:Sep 15 2009 Time: 14:51:29 SyslogID: 713904 Destination IP = 10.50.10.141, Decription: No crypto map bound to interface... dropping pkt Edit 3: This is the setup on the router right now. Result of the command: "show run" : Saved : ASA Version 7.2(4) ! hostname ciscoasa domain-name default.domain.invalid enable password HGFHGFGHFHGHGFHGF encrypted passwd NMMNMNMNMNMNMN encrypted names name 192.168.1.200 WebServer1 name 10.50.10.206 external-ip-address ! interface Vlan1 nameif inside security-level 100 ip address 192.168.1.1 255.255.255.0 ! interface Vlan2 nameif outside security-level 0 ip address external-ip-address 255.0.0.0 ! interface Vlan3 no nameif security-level 50 no ip address ! interface Ethernet0/0 switchport access vlan 2 ! interface Ethernet0/1 ! interface Ethernet0/2 ! interface Ethernet0/3 ! interface Ethernet0/4 ! interface Ethernet0/5 ! interface Ethernet0/6 ! interface Ethernet0/7 ! 
ftp mode passive dns server-group DefaultDNS domain-name default.domain.invalid object-group service l2tp udp port-object eq 1701 access-list outside_access_in remark Allow incoming tcp/http access-list outside_access_in extended permit tcp any host WebServer1 eq www access-list outside_access_in extended permit udp any any eq 1701 access-list inside_nat0_outbound extended permit ip any 192.168.1.208 255.255.255.240 access-list inside_cryptomap_1 extended permit ip interface outside interface inside pager lines 24 logging enable logging asdm informational mtu inside 1500 mtu outside 1500 ip local pool PPTP-VPN 192.168.1.210-192.168.1.220 mask 255.255.255.0 icmp unreachable rate-limit 1 burst-size 1 asdm image disk0:/asdm-524.bin no asdm history enable arp timeout 14400 global (outside) 1 interface nat (inside) 0 access-list inside_nat0_outbound nat (inside) 1 0.0.0.0 0.0.0.0 static (inside,outside) tcp interface www WebServer1 www netmask 255.255.255.255 access-group outside_access_in in interface outside timeout xlate 3:00:00 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02 timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00 timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00 timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute http server enable http 192.168.1.0 255.255.255.0 inside no snmp-server location no snmp-server contact snmp-server enable traps snmp authentication linkup linkdown coldstart crypto ipsec transform-set TRANS_ESP_3DES_SHA esp-3des esp-sha-hmac crypto ipsec transform-set TRANS_ESP_3DES_SHA mode transport crypto ipsec transform-set TRANS_ESP_3DES_MD5 esp-3des esp-md5-hmac crypto ipsec transform-set TRANS_ESP_3DES_MD5 mode transport crypto map outside_map 1 match address inside_cryptomap_1 crypto map outside_map 1 set transform-set TRANS_ESP_3DES_MD5 crypto map outside_map interface inside crypto isakmp enable outside crypto isakmp policy 10 authentication pre-share encryption 3des hash md5 group 2 lifetime 86400 telnet timeout 5 ssh timeout 5 console timeout 0 dhcpd auto_config outside ! dhcpd address 192.168.1.2-192.168.1.33 inside dhcpd enable inside ! group-policy DefaultRAGroup internal group-policy DefaultRAGroup attributes dns-server value 192.168.1.1 vpn-tunnel-protocol IPSec l2tp-ipsec username myusername password FGHFGHFHGFHGFGFHF nt-encrypted tunnel-group DefaultRAGroup general-attributes address-pool PPTP-VPN default-group-policy DefaultRAGroup tunnel-group DefaultRAGroup ipsec-attributes pre-shared-key * tunnel-group DefaultRAGroup ppp-attributes no authentication chap authentication ms-chap-v2 ! ! prompt hostname context Cryptochecksum:a9331e84064f27e6220a8667bf5076c1 : end
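
    One hedged observation (a sketch, not a verified configuration): the running config binds outside_map to the inside interface and matches traffic by interface address, while the log complains that packets for the Windows client hit an interface with no crypto map. L2TP/IPsec road-warrior clients are normally caught by a dynamic crypto map bound to the outside interface, along these lines (ASA 7.2 CLI; DYNMAP is an arbitrary name, and the transport-mode transform set already exists above):

        crypto dynamic-map DYNMAP 10 set transform-set TRANS_ESP_3DES_MD5
        crypto map outside_map 65535 ipsec-isakmp dynamic DYNMAP
        no crypto map outside_map interface inside
        crypto map outside_map interface outside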

    Read the article

  • OS Missing? Messed up the MBR on Win7 64-bit

    - by hom3lesshom3boy
    I have a Windows 7 machine with two hard drives: a 1TB C: drive and 500GB J:. I had Windows XP installed on C: and Windows 7 installed on J:. I installed Windows 7 after Windows XP from an installer .exe I (legally) bought and downloaded. It, and all of my other files, are sitting on my J: drive intact. While under my Windows 7 install, a few days ago I decided to use Priform's CCleaner and use its DriveWipe utility to wipe the C: drive. 1% into the process, I cancelled and attempted to use it again. It gives me an error saying it can't format the drive, so I poke around the Internet a bit, give up, and restart my computer. I first get an "OS is missing" error after the computer boots past the BIOS. I downloaded and put UBCD on a bootable USB to use another drivewiping tool to completely erase the C: drive, hoping it'll take the problem with it. No luck. I try to use TestDisk to make my J: my primary active drive, but no luck. I still get the "OS is missing" error. Or sometimes it'll hang at Verifying DMI Pool. Or sometimes I'll get the "NTLDR is missing" error. I get hold of Hiren's and put it on another bootable USB. I first I tried the Boot Windows 7 from Hard Drive option, and I get "Error 15: File Not Found". I tried the "Fix 'NTLDR is Missing'" option (I'm not quite sure why this is even showing up, since I'm trying to get into a HDD with Windows 7 installed. Probably messed up somewhere when I used TestDisk) and I get this list: I'll run through the error messages I get: 1st Try - Windows could not start because the following file is missing or corrupt: \system32\hal.dll 2nd Try - Windows could not start because the following file is missing or corrupt: \system32\ntoskrnl.exe 3rd Try - Windows could not start because of a computer disk hardware configuration problem. Could not read from the selected boot disk. Check boot path and disk hardware. 4th - 8th Try - Same as #3 9th Try - I/O Error accessing boot sector file multi(0)disk(0)fdisk(0)\BOOTSEC.DOS. And computer freezes. 10th Try - computer restarts Needless to say, not a single one of those works. I then tried to open up the Windows 7 exe I have sitting on my J: from the Mini-XP OS on Hiren's, but it won't run because I'm trying to run a 64-bit file from a 32-bit exe. At least, that's the problem according to these guys: http://social.technet.microsoft.com/...-b2f54e9c7d18/ I then borrowed a 64-bit Windows Home Premium CD from a friend to get to the recovery options. But I get the error message: This version of System Recovery Options is not compatible with the version of Windows you are trying to repair. Try using a recovery disc that is compatible with this version of Windows. I pressed Shift + F10 to get to the Command Prompt directly. These are the exact steps I took from there (paraphrased a little): X:\Sources>bootrec /Fixmbr The operation completed successfully. X:\Sources>bootrec /Fixboot The operation completed successfully. I restarted my computer, but it still didn't work. I unplugged the C: drive, then tried bootrec and Diskpart: X:\Sources> bootrec.exe X:\Sources> bootrec /RebuildBcd Total identified Windows installations: 1 [1] \\?\GLOBALROOT\Device\HarddiskVolume1\Windows Add installation to bootlist? Yes(Y)/No(N)/All(A):y The requested system device cannot be found. X:\Sources>DiskPart DISKPART> List Disk Disk # Status Size Free Dyn Gpt Disk 0_Online_465GB_0B_______* Disk 1 Online 1000MB 0B (this is Hiren's on a bootable usb) DISKPART> Select Disk 0 Disk 0 is now the selected disk. 
DISKPART> List Partition Partition # Type Size Offset Partition 1 System 465GB 31KB DISKPART> Select Partition 1 Partition 1 is now the selected partition DISKPART> Active The selected disk is not a fixed MBR disk. The ACTIVE command can only be used on fixed MBR disks. DISKPART> exit Leaving Diskpart... X:\Sources>bootrec /Fixmbr The operation completed successfully. X:\Sources>bootrec /Fixboot The operation completed successfully. Before I go any further, is there anything I'm overlooking/doing wrong? All I care about is making the J: and Windows 7 bootable again. SPECS: Windows 7 Professional 64-Bit GIGABYTE - Motherboard - Socket 775 - GA-P35-DS3R (rev. 2.1) Crucial Ballistix 2048MB PC6400 DDR2 800MHz (2x2GB) Intel Core 2 Duo E6700 Processor (2.6 (6GHZ) I think... not sure anymore C: HDD - SAMSUNG HD103UJ (1TB, not plugged in) J: HDD - WDC WD5000AKS-00V1A0 (500GB)
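
    As a hedged sketch (not a guaranteed fix), the usual BIOS/MBR repair sequence from the recovery prompt looks like the block below. It assumes the Windows 7 install is still visible as J:\Windows inside WinRE (drive letters often shift there, so confirm with diskpart's "list volume" first), and bootsect.exe ships on the install media under \boot if it is not already on the PATH. Note that if "list disk" really does flag the 465 GB disk as GPT (the asterisk in the output above), "active" and an MBR boot sector do not apply to that disk, which would explain the diskpart error.

        rem write a Windows 7 boot sector / MBR to every volume, then rebuild the BCD
        bootsect /nt60 ALL /force /mbr
        bcdboot J:\Windows /s J: /l en-US
        bootrec /rebuildbcd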

    Read the article

  • FFmpeg creates empty (black) frames

    - by resamsel
    I have a set of images from a timelapse shot (172 JPG files) that I want to convert into a movie. I tried several parameters with FFmpeg, but all I get is a video with black frames (though it has the expected length). ffmpeg -f image2 -vcodec mjpeg -y -i img_%03d.jpg timelapse2.mpg The command above creates this video: http://sdm-net.org/data/timelapse2.mpg What I'm expecting is something like this (created with Time Lapse Assembler.app): https://vimeo.com/39038362 - This is my fallback option, but I'd really like to create timelapse movies from a script. I'm on OSX Lion (10.7.3) with FFmpeg version (0.10) installed via Homebrew. I also tried to find a proper version of mencoder for OSX, but this doesn't seem to be an easy task. Also, ImageMagick's convert doesn't seem to work nicely, it creates really bad output and it seems there's not much I can do about it... Edit: With libx264 and an mp4 container: ffmpeg -f image2 -y -i img_%03d.jpg -vcodec libx264 timelapse4.mp4 Output: ffmpeg version 0.10 Copyright (c) 2000-2012 the FFmpeg developers built on Mar 26 2012 13:47:02 with clang 3.0 (tags/Apple/clang-211.12) configuration: --prefix=/usr/local/Cellar/ffmpeg/0.10 --enable-shared --enable-gpl --enable-version3 --enable-nonfree --enable-hardcoded-tables --enable-libfreetype --cc=/usr/bin/clang --enable-libx264 --enable-libfaac --enable-libmp3lame --enable-librtmp --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libxvid --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libass --disable-ffplay libavutil 51. 34.101 / 51. 34.101 libavcodec 53. 60.100 / 53. 60.100 libavformat 53. 31.100 / 53. 31.100 libavdevice 53. 4.100 / 53. 4.100 libavfilter 2. 60.100 / 2. 60.100 libswscale 2. 1.100 / 2. 1.100 libswresample 0. 6.100 / 0. 6.100 libpostproc 52. 0.100 / 52. 0.100 Input #0, image2, from 'img_%03d.jpg': Duration: 00:00:06.88, start: 0.000000, bitrate: N/A Stream #0:0: Video: mjpeg, yuvj420p, 3888x2592 [SAR 72:72 DAR 3:2], 25 fps, 25 tbr, 25 tbn, 25 tbc [buffer @ 0x7f8ec9415f20] w:3888 h:2592 pixfmt:yuvj420p tb:1/1000000 sar:72/72 sws_param: [libx264 @ 0x7f8ec981d800] using SAR=1/1 [libx264 @ 0x7f8ec981d800] frame MB size (243x162) > level limit (36864) [libx264 @ 0x7f8ec981d800] MB rate (984150) > level limit (983040) [libx264 @ 0x7f8ec981d800] using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle SSE4.2 AVX [libx264 @ 0x7f8ec981d800] profile High, level 5.1 [libx264 @ 0x7f8ec981d800] 264 - core 120 - H.264/MPEG-4 AVC codec - Copyleft 2003-2011 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=12 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00 Output #0, mp4, to 'timelapse4.mp4': Metadata: encoder : Lavf53.31.100 Stream #0:0: Video: h264 (![0][0][0] / 0x0021), yuvj420p, 3888x2592 [SAR 72:72 DAR 3:2], q=-1--1, 25 tbn, 25 tbc Stream mapping: Stream #0:0 -> #0:0 (mjpeg -> libx264) Press [q] to stop, [?] 
for help frame= 172 fps= 18 q=-1.0 Lsize= 259kB time=00:00:06.80 bitrate= 312.3kbits/s video:256kB audio:0kB global headers:0kB muxing overhead 1.089647% [libx264 @ 0x7f8ec981d800] frame I:1 Avg QP: 9.60 size:212820 [libx264 @ 0x7f8ec981d800] frame P:43 Avg QP:30.50 size: 291 [libx264 @ 0x7f8ec981d800] frame B:128 Avg QP:31.00 size: 285 [libx264 @ 0x7f8ec981d800] consecutive B-frames: 0.6% 0.0% 1.7% 97.7% [libx264 @ 0x7f8ec981d800] mb I I16..4: 22.5% 77.2% 0.3% [libx264 @ 0x7f8ec981d800] mb P I16..4: 0.0% 0.0% 0.0% P16..4: 0.0% 0.0% 0.0% 0.0% 0.0% skip:100.0% [libx264 @ 0x7f8ec981d800] mb B I16..4: 0.0% 0.0% 0.0% B16..8: 0.0% 0.0% 0.0% direct: 0.0% skip:100.0% L0: 1.2% L1:98.8% BI: 0.0% [libx264 @ 0x7f8ec981d800] 8x8 transform intra:77.2% inter:100.0% [libx264 @ 0x7f8ec981d800] coded y,uvDC,uvAC intra: 41.2% 23.4% 0.6% inter: 0.0% 0.0% 0.0% [libx264 @ 0x7f8ec981d800] i16 v,h,dc,p: 40% 25% 35% 1% [libx264 @ 0x7f8ec981d800] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 36% 32% 30% 1% 0% 0% 0% 0% 0% [libx264 @ 0x7f8ec981d800] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 51% 40% 6% 1% 1% 0% 1% 0% 1% [libx264 @ 0x7f8ec981d800] i8c dc,h,v,p: 60% 21% 19% 0% [libx264 @ 0x7f8ec981d800] Weighted P-Frames: Y:0.0% UV:0.0% [libx264 @ 0x7f8ec981d800] ref P L0: 92.3% 0.0% 0.0% 7.7% [libx264 @ 0x7f8ec981d800] ref B L0: 50.0% 0.0% 50.0% [libx264 @ 0x7f8ec981d800] ref B L1: 99.4% 0.6% [libx264 @ 0x7f8ec981d800] kb/s:304.49 Output timelapse4.mp4 (beacause of spam protection I can only post two links with my reputation): http sdm-net.org/data/timelapse4.mp4
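
    One hedged suggestion based on the log itself: x264 warns that the raw 3888x2592 stills exceed its level limits, and nearly every macroblock ends up skipped, which is consistent with a black output. Scaling the frames down to video size and forcing 4:2:0 output is usually enough, roughly:

        ffmpeg -y -r 25 -f image2 -i img_%03d.jpg \
               -vf scale=1920:-1 -vcodec libx264 -pix_fmt yuv420p timelapse.mp4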

    Read the article

  • ldirectord/ipvsadm does not show real IPs and does not work with pacemaker and corosync

    - by miguer27
    first thanks for your time. I'm having a problem with ldirectord that I can not solve, I comment my situation: I have two nodes with pace maker and corosync and configure somes resources: root@ldap1:/home/mamartin# crm status Last updated: Tue Jun 3 12:58:30 2014 Last change: Tue Jun 3 12:23:47 2014 via cibadmin on ldap1 Stack: openais Current DC: ldap2 - partition with quorum Version: 1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff 2 Nodes configured, 2 expected votes 7 Resources configured. Online: [ ldap1 ldap2 ] Resource Group: IPV_LVS IPV_4 (ocf::heartbeat:IPaddr2): Started ldap1 IPV_6 (ocf::heartbeat:IPv6addr): Started ldap1 lvs (ocf::heartbeat:ldirectord): Started ldap1 Clone Set: clon_IPV_lo [IPV_lo] Started: [ ldap2 ] Stopped: [ IPV_lo:1 ] root@ldap1:/home/mamartin# crm configure show node ldap2 \ attributes standby="off" node ldap1 \ attributes standby="off" primitive IPV-lo_4 ocf:heartbeat:IPaddr \ params ip="192.168.1.10" cidr_netmask="32" nic="lo" \ op monitor interval="5s" primitive IPV-lo_6 ocf:heartbeat:IPv6addrLO \ params ipv6addr="[fc00:1::3]" cidr_netmask="64" \ op monitor interval="5s" primitive IPV_4 ocf:heartbeat:IPaddr2 \ params ip="192.168.1.10" nic="eth0" cidr_netmask="25" lvs_support="true" \ op monitor interval="5s" primitive IPV_6 ocf:heartbeat:IPv6addr \ params ipv6addr="[fc00:1::3]" nic="eth0" cidr_netmask="64" \ op monitor interval="5s" primitive lvs ocf:heartbeat:ldirectord \ params configfile="/etc/ldirectord.cf" \ op monitor interval="20" timeout="10" \ meta target-role="Started" group IPV_LVS IPV_4 IPV_6 lvs group IPV_lo IPV-lo_6 IPV-lo_4 clone clon_IPV_lo IPV_lo \ meta interleave="true" target-role="Started" location cli-prefer-IPV_LVS IPV_LVS \ rule $id="cli-prefer-rule-IPV_LVS" inf: #uname eq ldap1 colocation LVS_no_IPV_lo -inf: clon_IPV_lo IPV_LVS property $id="cib-bootstrap-options" \ dc-version="1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff" \ cluster-infrastructure="openais" \ expected-quorum-votes="2" \ no-quorum-policy="ignore" \ stonith-enabled="false" \ last-lrm-refresh="1401264327" rsc_defaults $id="rsc-options" \ resource-stickiness="1000" The problem is in the ipvsadm only show a one real IP, when i configured two now, show the ldirector.cf: root@ldap1:/home/mamartin# ipvsadm IP Virtual Server version 1.2.1 (size=4096) Prot LocalAddress:Port Scheduler Flags - RemoteAddress:Port Forward Weight ActiveConn InActConn TCP ldap-maqueta.cica.es:ldap wrr - ldap2.cica.es:ldap Route 4 0 0 TCP [[fc00:1::3]]:ldap wrr - [[fc00:1::2]]:ldap Route 4 0 0 root@ldap1:/home/mamartin# cat /etc/ldirectord.cf checktimeout=10 checkinterval=2 autoreload=yes logfile="/var/log/ldirectord.log" quiescent=yes #ipv4 virtual=192.168.1.10:389 real=192.168.1.11:389 gate 4 real=192.168.1.12:389 gate 4 scheduler=wrr protocol=tcp checktype=on #ipv6 virtual6=[[fc00:1::3]]:389 real6=[[fc00:1::1]]:389 gate 4 real6=[[fc00:1::2]]:389 gate 4 scheduler=wrr protocol=tcp checkport=389 checktype=on and in the logs I see nothing clear: root@ldap1:/home/mamartin# ldirectord -d /etc/ldirectord.cf start DEBUG2: Running system(/sbin/ipvsadm -a -t 192.168.1.10:389 -r 192.168.1.11:389 -g -w 0) Running system(/sbin/ipvsadm -a -t 192.168.1.10:389 -r 192.168.1.11:389 -g -w 0) DEBUG2: Quiescent real server: 192.168.1.11:389 (192.168.1.10:389) (Weight set to 0) Quiescent real server: 192.168.1.11:389 (192.168.1.10:389) (Weight set to 0) DEBUG2: Disabled real server=on:tcp:192.168.1.11:389:::4:gate:\/: (virtual=tcp:192.168.1.10:389) DEBUG2: Running system(/sbin/ipvsadm -a -t 192.168.1.10:389 -r 
192.168.1.12:389 -g -w 0) Running system(/sbin/ipvsadm -a -t 192.168.1.10:389 -r 192.168.1.12:389 -g -w 0) DEBUG2: Quiescent real server: 192.168.1.12:389 (192.168.1.10:389) (Weight set to 0) Quiescent real server: 192.168.1.12:389 (192.168.1.10:389) (Weight set to 0) DEBUG2: Disabled real server=on:tcp:192.168.1.12:389:::4:gate:\/: (virtual=tcp:192.168.1.10:389) DEBUG2: Checking on: Real servers are added without any checks DEBUG2: Resetting soft failure count: 192.168.1.12:389 (tcp:192.168.1.10:389) Resetting soft failure count: 192.168.1.12:389 (tcp:192.168.1.10:389) DEBUG2: Running system(/sbin/ipvsadm -a -t 192.168.1.10:389 -r 192.168.1.12:389 -g -w 4) Running system(/sbin/ipvsadm -a -t 192.168.1.10:389 -r 192.168.1.12:389 -g -w 4) Destination already exists root@ldap1:/home/mamartin# cat /var/log/ldirectord.log [Tue Jun 3 09:39:29 2014|ldirectord.cf|19266] Quiescent real server: 192.168.1.11:389 (192.168.1.10:389) (Weight set to 0) [Tue Jun 3 09:39:29 2014|ldirectord.cf|19266] Quiescent real server: 192.168.1.12:389 (192.168.1.10:389) (Weight set to 0) [Tue Jun 3 09:39:29 2014|ldirectord.cf|19266] Resetting soft failure count: 192.168.1.12:389 (tcp:192.168.1.10:389) [Tue Jun 3 09:39:29 2014|ldirectord.cf|19266] system(/sbin/ipvsadm -a -t 192.168.1.10:389 -r 192.168.1.12:389 -g -w 4) failed: [Tue Jun 3 09:39:29 2014|ldirectord.cf|19266] Added real server: 192.168.1.12:389 (192.168.1.10:389) (Weight set to 4) [Tue Jun 3 09:39:29 2014|ldirectord.cf|19266] Resetting soft failure count: 192.168.1.11:389 (tcp:192.168.1.10:389) [Tue Jun 3 09:39:29 2014|ldirectord.cf|19266] Restored real server: 192.168.1.11:389 (192.168.1.10:389) (Weight set to 4) [Tue Jun 3 09:39:29 2014|ldirectord.cf|19266] Resetting soft failure count: [[fc00:1::2]]:389 (tcp:[[fc00:1::3]]:389) [Tue Jun 3 09:39:29 2014|ldirectord.cf|19266] system(/sbin/ipvsadm -a -t [[fc00:1::3]]:389 -r [[fc00:1::2]]:389 -g -w 4) failed: [Tue Jun 3 09:39:29 2014|ldirectord.cf|19266] Added real server: [[fc00:1::2]]:389 ([[fc00:1::3]]:389) (Weight set to 4) [Tue Jun 3 09:39:29 2014|ldirectord.cf|19266] Resetting soft failure count: [[fc00:1::1]]:389 (tcp:[[fc00:1::3]]:389) [Tue Jun 3 09:39:29 2014|ldirectord.cf|19266] Restored real server: [[fc00:1::1]]:389 ([[fc00:1::3]]:389) (Weight set to 4) do not know if this is a bug or a configuration error, can anyone help? Regards.
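
    A hedged way to narrow this down from the logs above (the commands simply mirror what ldirectord reports as failing):

        # list the LVS table numerically -- name resolution in plain `ipvsadm`
        # output can make two reals look like a single entry:
        ipvsadm -L -n

        # replay the exact command ldirectord logged as "failed:" to see the
        # kernel's real error message:
        /sbin/ipvsadm -a -t 192.168.1.10:389 -r 192.168.1.12:389 -g -w 4; echo "exit=$?"

        # if the destination already exists with weight 0 (quiescent=yes), it has
        # to be edited rather than added:
        /sbin/ipvsadm -e -t 192.168.1.10:389 -r 192.168.1.12:389 -g -w 4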

    Read the article

  • SPF record doesn't work (not sure which DNS server to tweak)

    - by Ion
    Problem: Google (and perhaps others) marks our emails as SPF neutral. Let me give you some background about the setup: initially got a dedicated server (Hetzner) with Plesk installed to host a domain/web application, let's say: bigjaws.com. Plesk automatically creates a DNS zone for it with some records for the various services it provides out of the box, e.g. webmail.bigjaws.com as a CNAME to bigjaws.com to provide Horde/whatever, etc. Let me point out four relevant of these records (where XXX.XXX.XXX.158 is our dedicated IP): bigjaws.com. A XXX.XXX.XXX.158 mail.bigjaws.com. A XXX.XXX.XXX.158 bigjaws.com MX (10) mail.bigjaws.com. bigjaws.com. TXT v=spf1 +a +mx -all The above records are not(?) valid anymore though, because after using this dedicated server for a while, our site got bigger and bigger so we decided to move our operations over to AWS (EC2, RDS, ELB, etc), but we retained the mail functionality as is, i.e. emails from [email protected] are sent by connecting to our dedicated server where Plesk takes care of things. This was decided in order not to setup anything from scratch. Of course for all DNS-related things we now use Route53. In Route53 I have the following records: mail.schoox.com. A XXX.XXX.XXX.158 bigjaws.com. MX (10) mail.bigjaws.com bigjaws.com. SPF "v=spf1 +ip4:XXX.XXX.XXX.158 +mx ~all" From my understanding of SPF, the SPF status should have been passed: I designate that all email being sent by bigjaws.com from XXX.XXX.XXX.158 are valid/not spam (I added +mx there but I'm not sure if needed). When a mail server receives an email, doesn't it lookup the SPF record of the domain and checks against the IP it got the email from? Checking with spfquery: root@box:~# spfquery -ip XXX.XXX.XXX.158 -sender [email protected] -rcpt-to [email protected] StartError Context: Failed to query MAIL-FROM ErrorCode: (2) Could not find a valid SPF record Error: No DNS data for 'bigjaws.com'. EndError noneneutral Please see http://www.openspf.org/Why?id=employee1%40bigjaws.com&ip=XXX.XXX.XXX.158&receiver=spfquery : Reason: default spfquery: XXX.XXX.XXX.158 is neither permitted nor denied by domain of bigjaws.com Received-SPF: neutral (spfquery: XXX.XXX.XXX.158 is neither permitted nor denied by domain of bigjaws.com) client-ip=XXX.XXX.XXX.158; [email protected]; If I go to the address listed above (openspf.org) it tells me that the message should have been accepted(!): spfquery rejected a message that claimed an envelope sender address of [email protected]. spfquery received a message from static.158.XXX.XXX.XXX.clients.your-server.de (XXX.XXX.XXX.158) that claimed an envelope sender address of [email protected]. The domain bigjaws.com has authorized static.158.XXX.XXX.XXX.clients.your-server.de (XXX.XXX.XXX.158) to send mail on its behalf, so the message should have been accepted. It is impossible for us to say why it was rejected. What should I do? If the problem persists, contact the bigjaws.com postmaster. Also, here are some headers from an email sent by one of our [email protected] addresses to a gmail.com address (by the way, bigjaws.de listed in the "Received: from" field was the initial domain hosted on the dedicated server before adding the .com one -- both are still listed as separate subscriptions under Plesk). 
Delivered-To: [email protected] Received: by 10.14.177.70 with SMTP id c46csp289656eem; Wed, 23 Oct 2013 01:11:00 -0700 (PDT) X-Received: by 10.14.102.66 with SMTP id c42mr306186eeg.47.1382515860386; Wed, 23 Oct 2013 01:11:00 -0700 (PDT) Return-Path: <[email protected]> Received: from bigjaws.de (static.158.XXX.XXX.XXX.clients.your-server.de. [XXX.XXX.XXX.158]) by mx.google.com with ESMTPS id l4si19438578eew.161.2013.10.23.01.10.59 for <[email protected]> (version=TLSv1 cipher=RC4-SHA bits=128/128); Wed, 23 Oct 2013 01:10:59 -0700 (PDT) Received-SPF: neutral (google.com: XXX.XXX.XXX.158 is neither permitted nor denied by best guess record for domain of [email protected]) client-ip=XXX.XXX.XXX.158; Authentication-Results: mx.google.com; spf=neutral (google.com: XXX.XXX.XXX.158 is neither permitted nor denied by best guess record for domain of [email protected]) [email protected] DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=default; d=bigjaws.com; b=WwRAS0WKjp9lO17iMluYPXOHzqRcOueiQT4rPdvy3WFf0QzoXiy6rLfxU/Ra53jL1vlPbwlLNa5gjoJBi7ZwKfUcvs3s02hJI7b3ozl0fEgJtTPKoCfnwl4bLPbtXNFu; h=Received:Received:Message-ID:Date:From:User-Agent:MIME-Version:To:Subject:Content-Type:Content-Transfer-Encoding; Received: (qmail 22722 invoked from network); 23 Oct 2013 10:10:59 +0200 Received: from hostname.static.ISP.com (HELO ?192.168.1.60?) (YYY.YYY.ISP.IP) by static.158.XXX.XXX.XXX.clients.your-server.de. with ESMTPSA (DHE-RSA-AES256-SHA encrypted, authenticated); 23 Oct 2013 10:10:59 +0200 Message-ID: <[email protected]> Date: Wed, 23 Oct 2013 11:11:00 +0300 From: BigJaws Employee <[email protected]> User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.0.1 MIME-Version: 1.0 To: [email protected] Subject: test SPF Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit test SPF Any ideas why SPF is not working correctly? Also, are there any DNS settings that are not needed anymore and create a problem?
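
    One point worth checking first: receivers such as Google evaluate SPF from the domain's TXT record; a Route 53 record of type SPF on its own is widely ignored (the dedicated SPF record type has been deprecated). A quick way to see what is actually published, and what an outside resolver sees:

        dig +short TXT bigjaws.com
        dig +short SPF bigjaws.com

        # ask a public resolver directly, in case the authoritative answer and
        # what the rest of the world sees have diverged:
        host -t TXT bigjaws.com 8.8.8.8

    If the TXT query comes back empty, publishing the same v=spf1 string as a TXT record in Route 53 and re-testing after propagation is the obvious next step.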

    Read the article

  • how to set up kismet.conf on Ubuntu

    - by Registered User
    I installed Kismet on my Ubuntu 10.04 machine as apt-get install kismet every thing seems to work fine. but when I launch it I see following error kismet Launching kismet_server: //usr/bin/kismet_server Suid priv-dropping disabled. This may not be secure. No specific sources given to be enabled, all will be enabled. Non-RFMon VAPs will be destroyed on multi-vap interfaces (ie, madwifi-ng) Enabling channel hopping. Enabling channel splitting. NOTICE: Disabling channel hopping, no enabled sources are able to change channel. Source 0 (addme): Opening none source interface none... FATAL: Please configure at least one packet source. Kismet will not function if no packet sources are defined in kismet.conf or on the command line. Please read the README for more information about configuring Kismet. Kismet exiting. Done. I followed this guide http://www.ubuntugeek.com/kismet-an-802-11-wireless-network-detector-sniffer-and-intrusion-detection-system.html#more-1776 how ever in kismet.conf I am not clear with following line source=none,none,addme as to what should I change this to. lspci -vnn shows 0c:00.0 Network controller [0280]: Broadcom Corporation BCM4312 802.11b/g [14e4:4315] (rev 01) Subsystem: Dell Device [1028:000c] Flags: bus master, fast devsel, latency 0, IRQ 17 Memory at f69fc000 (64-bit, non-prefetchable) [size=16K] Capabilities: [40] Power Management version 3 Capabilities: [58] Vendor Specific Information <?> Capabilities: [e8] Message Signalled Interrupts: Mask- 64bit+ Queue=0/0 Enable- Capabilities: [d0] Express Endpoint, MSI 00 Capabilities: [100] Advanced Error Reporting <?> Capabilities: [13c] Virtual Channel <?> Capabilities: [160] Device Serial Number Capabilities: [16c] Power Budgeting <?> Kernel driver in use: wl Kernel modules: wl, ssb and iwconfig shows lo no wireless extensions. eth0 no wireless extensions. eth1 IEEE 802.11bg ESSID:"WIKUCD" Mode:Managed Frequency:2.462 GHz Access Point: <00:43:92:21:H5:09> Bit Rate=11 Mb/s Tx-Power:24 dBm Retry min limit:7 RTS thr:off Fragment thr:off Encryption key:off Power Managementmode:All packets received Link Quality=1/5 Signal level=-81 dBm Noise level=-90 dBm Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:169 Invalid misc:0 Missed beacon:0 So what should I be putting in place of source=none,none,addme with output I mentioned above ?
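
    For reference, the source line takes the form type,interface,name. The sketch below is only a guess at the right values: the exact capture-source type depends on the Kismet release (check the README under /usr/share/doc/kismet), and the proprietary Broadcom wl driver generally cannot enter monitor mode at all, so the sketch assumes the open b43 driver has been loaded instead.

        # load the open driver in place of wl (assumption: b43 firmware is installed;
        # the interface may then appear as wlan0 rather than eth1)
        sudo modprobe -r wl && sudo modprobe b43

        # /etc/kismet/kismet.conf
        source=b43,eth1,addme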

    Read the article

  • OpenVPN IPv6 over IPv4 tunnel

    - by user66779
    Today I installed OpenVPN 2.3rc2 on both my windows 7 client machine and centos 6 server. This new version of OpenVPN provides full compatibility for IPv6. The Problem: I am currently able to connect to the server (through the IPv4 tunnel) and ping the IPv6 address which is assigned to my client and I can also ping the tun0 interface on the server. However, I cannot browse to any IPv6 websites. My vps provider has given me this: 2607:f840:0044:0022:0000:0000:0000:0000/64 is routed to this server (2607:f840:0:3f:0:0:0:eda). This is ifconfig after setup with OpenVPN running: eth0 Link encap:Ethernet HWaddr 00:16:3E:12:77:54 inet addr:208.111.39.160 Bcast:208.111.39.255 Mask:255.255.255.0 inet6 addr: 2607:f740:0:3f::eda/64 Scope:Global inet6 addr: fe80::216:3eff:fe12:7754/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:2317253 errors:0 dropped:7263 overruns:0 frame:0 TX packets:1977414 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:1696120096 (1.5 GiB) TX bytes:1735352992 (1.6 GiB) Interrupt:29 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 b) TX bytes:0 (0.0 b) tun0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:10.8.0.1 P-t-P:10.8.0.2 Mask:255.255.255.255 inet6 addr: 2607:f740:44:22::1/64 Scope:Global UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1 RX packets:739567 errors:0 dropped:0 overruns:0 frame:0 TX packets:1218240 errors:0 dropped:1542 overruns:0 carrier:0 collisions:0 txqueuelen:100 RX bytes:46512557 (44.3 MiB) TX bytes:1559930874 (1.4 GiB) So OpenVPN is sucessfully creating a tun0 interface and assigning clients IPv6 addresses using 2607:f840:44:22::/64. The first client to connect is getting 2607:f840:44:22::1000 and the second 2607:f840:44:22::1001, and so on... plus 1 each time. After connecting as the first client, I can ping from my windows client machine 2607:f740:44:22::1 and 2607:f740:44:22::1000. However, I have no access to IPv6 websites. I believe the problem is that the tun0 IPv6 addressees are not being forwarded to the eth0 interface. 
This is the firewall running on the server: #!/bin/sh # # iptables configuration script # # Flush all current rules from iptables # iptables -F iptables -t nat -F # # Allow SSH connections on tcp port 22 # iptables -A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT iptables -A OUTPUT -o eth0 -p tcp --sport 22 -j ACCEPT # # Set access for localhost # iptables -A INPUT -i lo -j ACCEPT # # Accept connections on 1195 for vpn access from client # iptables -A INPUT -i eth0 -p udp --dport 1195 -m state --state NEW,ESTABLISHED -j ACCEPT iptables -A OUTPUT -o eth0 -p udp --sport 1195 -m state --state ESTABLISHED -j ACCEPT # # Apply forwarding for OpenVPN Tunneling # iptables -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT iptables -A FORWARD -s 10.8.0.0/24 -j ACCEPT iptables -t nat -A POSTROUTING -o eth0 -j SNAT --to 209.111.39.160 iptables -A FORWARD -j REJECT # # Enable forwarding # echo 1 > /proc/sys/net/ipv4/ip_forward # # Set default policies for INPUT, FORWARD and OUTPUT chains # iptables -P INPUT ACCEPT iptables -P FORWARD ACCEPT iptables -P OUTPUT ACCEPT # # IPv6 # IP6TABLES=/sbin/ip6tables $IP6TABLES -F INPUT $IP6TABLES -F FORWARD $IP6TABLES -F OUTPUT echo -n "1" >/proc/sys/net/ipv6/conf/all/forwarding echo -n "1" >/proc/sys/net/ipv6/conf/all/proxy_ndp echo -n "0" >/proc/sys/net/ipv6/conf/all/autoconf echo -n "0" >/proc/sys/net/ipv6/conf/all/accept_ra $IP6TABLES -A INPUT -i eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT $IP6TABLES -A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT $IP6TABLES -A INPUT -i eth0 -p icmpv6 -j ACCEPT $IP6TABLES -P INPUT ACCEPT $IP6TABLES -P FORWARD ACCEPT $IP6TABLES -P OUTPUT ACCEPT Server.conf: server-ipv6 2607:f840:44:22::/64 server 10.8.0.0 255.255.255.0 port 1195 proto udp dev tun ca ca.crt cert server.crt key server.key dh dh2048.pem ifconfig-pool-persist ipp.txt push "redirect-gateway def1 bypass-dhcp" push "dhcp-option DNS 208.67.222.222" push "dhcp-option DNS 208.67.220.220" keepalive 10 60 tls-auth ta.key 0 cipher AES-256-CBC comp-lzo user nobody group nobody persist-key persist-tun status openvpn-status.log log-append openvpn.log verb 5 Client.conf: client dev tun nobind keepalive 10 60 hand-window 15 remote 209.111.39.160 1195 udp persist-key persist-tun ca ca.crt key client1.key cert client1.crt remote-cert-tls server tls-auth ta.key 1 comp-lzo verb 3 cipher AES-256-CBC I'm not sure where I am going wrong, it could be the firewall, or something missing from server or client.conf. This version of OpenVPN was only released yesterday, and there's little info on the internet about how to setup an IPv6 over IPv4 vpn tunnel. I've read the manual for this new version of OpenVPN (parts pertaining to IPv6) and it provides very little info too. Thanks for any help.
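
    A hedged guess at the missing piece: OpenVPN 2.3 has no IPv6 counterpart of redirect-gateway, so unless the server pushes an explicit IPv6 route covering the public internet, clients can ping the tunnel addresses but never send web traffic through it. A sketch (the prefix follows the server-ipv6 line above):

        # in server.conf -- push a route for the global unicast range to clients:
        #   push "route-ipv6 2000::/3"

        # on the server, make sure forwarding is really on and that a client
        # address is reachable via tun0:
        sysctl -w net.ipv6.conf.all.forwarding=1
        ip -6 route get 2607:f840:44:22::1000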

    Read the article

  • Can I use the same machine as a client and server for SSH?

    - by achraf
    For development tests, I need to setup an SFTP server. So I want to know if it's possible to use the same machine as the client and the server. I tried and I keep getting this error: > Permission denied (publickey). > Connection closed and by running ssh -v agharroud@localhost i get : > OpenSSH_3.8.1p1,OpenSSL 0.9.7d 17 Mar > debug1: Reading configuration data /etc/ssh_config > debug1: Connecting to localhost [127.0.0.1] port 22. > debug1: Connection established. > debug1: identity file /home/agharroud/.ssh/identity type -1 > debug1: identity file /home/agharroud/.ssh/id_rsa type 1 > debug1: identity file /home/agharroud/.ssh/id_dsa type -1 > debug1: Remote protocol version 2.0, remote software version OpenSSH_3.8.1p1 > debug1: match: OpenSSH_3.8.1p1 pat OpenSSH* > debug1: Enabling compatibility mode for protocol 2.0 > debug1: Local version string SSH-2.0-OpenSSH_3.8.1p1 > debug1: SSH2_MSG_KEXINIT sent > debug1: SSH2_MSG_KEXINIT received > debug1: kex:server->client aes128-cbc hmac-md5 none > debug1: kex: client->server aes128-cbc hmac-md5 none > debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent > debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP > debug1: SSH2_MSG_KEX_DH_GEX_INIT sent > debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY > debug1: Host 'localhost' is known and matches the RSA host key. > debug1: Found key in /home/agharroud/.ssh/known_hosts:1 > debug1: ssh_rsa_verify: signature correct > debug1: SSH2_MSG_NEWKEYS sent > debug1: expecting SSH2_MSG_NEWKEYS > debug1: SSH2_MSG_NEWKEYS received > debug1: SSH2_MSG_SERVICE_REQUEST sent > debug1: SSH2_MSG_SERVICE_ACCEPT > received > > ****USAGE WARNING**** > > This is a private computer system. This computer system, including all > related equipment, networks, and network devices (specifically > including Internet access) are provided only for authorized use. This > computer system may be monitored for all lawful purposes, including to > ensure that its use is authorized, for management of the system, to > facilitate protection against unauthorized access, and to verify > security procedures, survivability, and operational security. Monitoring > includes active attacks by authorized entities to test or verify the > security of this system. During monitoring, information may be > examined, recorded, copied and used for authorized purposes. All > information, including personal information, placed or sent over this > system may be monitored. > > Use of this computer system, authorized or unauthorized, > constitutes consent to monitoring of this system. Unauthorized use may > subject you to criminal prosecution. Evidence of unauthorized use collected > during monitoring may be used for administrative, criminal, or other > adverse action. Use of this system constitutes consent to monitoring for > these purposes. > > debug1: Authentications that can continue: publickey > debug1: Next authentication method: publickey > debug1: Trying private key:/home/agharroud/.ssh/identity > debug1: Offering public key:/home/agharroud/.ssh/id_rsa > debug1:Authentications that can continue:publickey > debug1: Trying private key:/home/agharroud/.ssh/id_dsa > debug1: No more authentication methods to try. > Permission denied (publickey). Any ideas about the problem ? thanks !
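
    Connecting to localhost is perfectly fine for this; the failure above is simply that the server accepts only public-key authentication and no key is authorized for the account yet. A minimal sketch:

        # the log shows ~/.ssh/id_rsa already exists, so it only needs to be authorized:
        cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
        chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys
        ssh agharroud@localhost        # should now log in without the publickey error

    Alternatively, enabling PasswordAuthentication in sshd_config and restarting sshd works just as well for a throwaway test server.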

    Read the article

  • WiX 3 Tutorial: Custom EULA License and MSI localization

    - by Mladen Prajdic
In this part of the ongoing WiX tutorial series we’ll take a look at how to localize your MSI into different languages. We’re still the mighty SuperForm: the program that takes care of all your label color needs. :)

Localizing the MSI

With WiX 3.0, localizing an MSI is a simple and straightforward process. First let’s look at the WiX project Properties->Build. There you can see the "Cultures to build" textbox. Put the specific cultures to build into the textbox, or leave it empty to build all of them. Cultures have to be in the correct culture format, like en-US, en-GB or de-DE. Next we have to tell WiX which cultures we actually have in our project. Take a look at the first post in the series about Solution/Project structure and look at the Lang directory in the project structure picture. There we have de-de and en-us subfolders, each with its own localized content. In the subfolders pay attention to the WXL files Loc_de-de.wxl and Loc_en-us.wxl. Each one has a <String Id="LANG"> under the WixLocalization root node. By including the string with id LANG we tell WiX we want that culture built. For English we have <String Id="LANG">1033</String>, for German <String Id="LANG">1031</String> in Loc_de-de.wxl, and for French we’d have to create another file, Loc_fr-FR.wxl, and put in <String Id="LANG">1036</String>. WXL files are localization files: any string we want to localize goes in there. To reference one we use the loc keyword like this: !(loc.IdOfTheVariable) => !(loc.MustCloseSuperForm)

This is our Loc_en-us.wxl. Note that the German wxl has an identical structure, but the values are in German.

<?xml version="1.0" encoding="utf-8"?>
<WixLocalization Culture="en-us" xmlns="http://schemas.microsoft.com/wix/2006/localization" Codepage="1252">
  <String Id="LANG">1033</String>
  <String Id="ProductName">SuperForm</String>
  <String Id="LicenseRtf" Overridable="yes">\Lang\en-us\EULA_en-us.rtf</String>
  <String Id="ManufacturerName">My Company Name</String>
  <String Id="AppNotSupported">This application is not supported on your current OS. Minimal OS supported is Windows XP SP2</String>
  <String Id="DotNetFrameworkNeeded">.NET Framework 3.5 is required. Please install the .NET Framework then run this installer again.</String>
  <String Id="MustCloseSuperForm">Must close SuperForm!</String>
  <String Id="SuperFormNewerVersionInstalled">A newer version of !(loc.ProductName) is already installed.</String>
  <String Id="ProductKeyCheckDialog_Title">!(loc.ProductName) setup</String>
  <String Id="ProductKeyCheckDialogControls_Title">!(loc.ProductName) Product check</String>
  <String Id="ProductKeyCheckDialogControls_Description">Please enter the following information to perform the license check.</String>
  <String Id="ProductKeyCheckDialogControls_FullName">Full Name:</String>
  <String Id="ProductKeyCheckDialogControls_Organization">Organization:</String>
  <String Id="ProductKeyCheckDialogControls_ProductKey">Product Key:</String>
  <String Id="ProductKeyCheckDialogControls_InvalidProductKey">The product key you entered is invalid. Please call user support.</String>
</WixLocalization>

As you can see from the file, we can use localization variables inside other variables, as we do for the SuperFormNewerVersionInstalled string. The ProductKeyCheckDialog* strings localize a custom dialog for the product key check, which we’ll look at in the next post.

Built-in dialog text localization

Under the de-de folder there’s also the WixUI_de-de.wxl file. This file contains German translations of all the texts in the WiX built-in dialogs. 
It can be downloaded from the WiX 3.0.5419.0 SourceForge site: download wix3-sources.zip and go to \src\ext\UIExtension\wixlib. There you’ll find all the WiX dialog texts already translated into 12 languages.

Localizing the custom EULA license

Here it gets ugly. We can override the default EULA license easily by overriding the WixUILicenseRtf WiX variable like this: <WixVariable Id="WixUILicenseRtf" Value="License.rtf" /> where License.rtf is the name of your custom EULA license file. The downside of this method is that you can only have one license file, which means no localization for it. That’s why we need a workaround. The license is shown on a dialog named LicenseAgreementDialog. What we have to do is overwrite that dialog and insert the functionality for localization. This is the code for LicenseAgreementDialogOverwritten.wxs, an overwritten LicenseAgreementDialog that supports localization. LicenseAcceptedOverwritten replaces the built-in LicenseAccepted variable.

<?xml version="1.0" encoding="UTF-8" ?>
<Wix xmlns="http://schemas.microsoft.com/wix/2006/wi">
  <Fragment>
    <UI>
      <Dialog Id="LicenseAgreementDialogOverwritten" Width="370" Height="270" Title="!(loc.LicenseAgreementDlg_Title)">
        <Control Id="LicenseAcceptedOverwrittenCheckBox" Type="CheckBox" X="20" Y="207" Width="330" Height="18" CheckBoxValue="1" Property="LicenseAcceptedOverwritten" Text="!(loc.LicenseAgreementDlgLicenseAcceptedCheckBox)" />
        <Control Id="Back" Type="PushButton" X="180" Y="243" Width="56" Height="17" Text="!(loc.WixUIBack)" />
        <Control Id="Next" Type="PushButton" X="236" Y="243" Width="56" Height="17" Default="yes" Text="!(loc.WixUINext)">
          <Publish Event="SpawnWaitDialog" Value="WaitForCostingDlg">CostingComplete = 1</Publish>
          <Condition Action="disable"><![CDATA[ LicenseAcceptedOverwritten <> "1" ]]></Condition>
          <Condition Action="enable">LicenseAcceptedOverwritten = "1"</Condition>
        </Control>
        <Control Id="Cancel" Type="PushButton" X="304" Y="243" Width="56" Height="17" Cancel="yes" Text="!(loc.WixUICancel)">
          <Publish Event="SpawnDialog" Value="CancelDlg">1</Publish>
        </Control>
        <Control Id="BannerBitmap" Type="Bitmap" X="0" Y="0" Width="370" Height="44" TabSkip="no" Text="!(loc.LicenseAgreementDlgBannerBitmap)" />
        <Control Id="LicenseText" Type="ScrollableText" X="20" Y="60" Width="330" Height="140" Sunken="yes" TabSkip="no">
          <!-- This is the original line -->
          <!--<Text SourceFile="!(wix.WixUILicenseRtf=$(var.LicenseRtf))" />-->
          <!-- To enable EULA localization we change it to this -->
          <Text SourceFile="$(var.ProjectDir)\!(loc.LicenseRtf)" />
          <!-- In each of the localization files (wxl) put a line like this:
               <String Id="LicenseRtf" Overridable="yes">\Lang\en-us\EULA_en-us.rtf</String> -->
        </Control>
        <Control Id="Print" Type="PushButton" X="112" Y="243" Width="56" Height="17" Text="!(loc.WixUIPrint)">
          <Publish Event="DoAction" Value="WixUIPrintEula">1</Publish>
        </Control>
        <Control Id="BannerLine" Type="Line" X="0" Y="44" Width="370" Height="0" />
        <Control Id="BottomLine" Type="Line" X="0" Y="234" Width="370" Height="0" />
        <Control Id="Description" Type="Text" X="25" Y="23" Width="340" Height="15" Transparent="yes" NoPrefix="yes" Text="!(loc.LicenseAgreementDlgDescription)" />
        <Control Id="Title" Type="Text" X="15" Y="6" Width="200" Height="15" Transparent="yes" NoPrefix="yes" Text="!(loc.LicenseAgreementDlgTitle)" />
      </Dialog>
    </UI>
  </Fragment>
</Wix>

Look at the Control with Id "LicenseText" and read the comments. We’ve changed the original license text source to "$(var.ProjectDir)\!(loc.LicenseRtf)". 
var.ProjectDir is the directory of the project file. The !(loc.LicenseRtf) is where the magic happens. Scroll up and take a look at the wxl localization file example: we have LicenseRtf declared there, and it’s been made overridable so developers can change it if they want. The value of LicenseRtf is the path to our localized EULA, relative to the WiX project directory. With a little hacking we’ve achieved a fully localizable installer package.

The final step is to insert the extended LicenseAgreementDialogOverwritten license dialog into the installer GUI chain. This is how it’s done, under the <UI> node of course:

<UI>
  <!-- code to be discussed in later posts -->

  <!-- BEGIN UI LOGIC FOR CLEAN INSTALLER -->
  <Publish Dialog="WelcomeDlg" Control="Next" Event="NewDialog" Value="LicenseAgreementDialogOverwritten">1</Publish>
  <Publish Dialog="LicenseAgreementDialogOverwritten" Control="Back" Event="NewDialog" Value="WelcomeDlg">1</Publish>
  <Publish Dialog="LicenseAgreementDialogOverwritten" Control="Next" Event="NewDialog" Value="ProductKeyCheckDialog">LicenseAcceptedOverwritten = "1" AND NOT OLDER_VERSION_FOUND</Publish>
  <Publish Dialog="InstallDirDlg" Control="Back" Event="NewDialog" Value="ProductKeyCheckDialog">1</Publish>
  <!-- END UI LOGIC FOR CLEAN INSTALLER -->

  <!-- code to be discussed in later posts -->
</UI>

For something that should be simple for the end developer to do, localization can be a bit advanced for the novice WiXer. Hope this post makes the journey easier and that future versions of WiX improve this process.

WiX 3 tutorial by Mladen Prajdic – navigation:
WiX 3 Tutorial: Solution/Project structure and Dev resources
WiX 3 Tutorial: Understanding main wxs and wxi file
WiX 3 Tutorial: Generating file/directory fragments with Heat.exe
WiX 3 Tutorial: Custom EULA License and MSI localization
WiX 3 Tutorial: Product Key Check custom action
WiX 3 Tutorial: Building an updater
WiX 3 Tutorial: Icons and installer pictures
WiX 3 Tutorial: Creating a Bootstrapper
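For orientation only: once the cultures and wxl files are wired up as above, the per-culture build that the "Cultures to build" box drives can also be reproduced by hand with candle and light. This is a rough Windows command-line sketch, not taken from the post itself; the file and output names (SuperForm.wxs, SuperForm_en-us.msi, and so on) are assumptions based on the project layout used in this series, while -ext, -cultures, -loc and -out are standard WiX 3.x switches:

rem Sketch only -- a manual per-culture build with candle and light.
rem File and folder names follow the ones used in this series.
candle SuperForm.wxs -ext WixUIExtension -out SuperForm.wixobj

rem One light pass per culture, pointing at that culture's .wxl file:
light SuperForm.wixobj -ext WixUIExtension -cultures:en-us -loc Lang\en-us\Loc_en-us.wxl -out SuperForm_en-us.msi
light SuperForm.wixobj -ext WixUIExtension -cultures:de-de -loc Lang\de-de\Loc_de-de.wxl -out SuperForm_de-de.msi

Running one link pass per culture is roughly what the "Cultures to build" setting does behind the scenes: the compiled wixobj is shared, and only the localization strings change per output MSI.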

    Read the article

  • Build-Essentials installation failing

    - by Brickman
I am having trouble accessing several critical header files that should be part of the build process. The "Ubuntu Software Center" shows "Build Essentials" as installed. Next I ran the following two commands, which did not improve the problem:

~$ sudo apt-get install build-essential
[sudo] password for:
Reading package lists... Done
Building dependency tree
Reading state information... Done
build-essential is already the newest version.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

:~$ sudo apt-get install -f
Reading package lists... Done
Building dependency tree
Reading state information... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
:~$

Dump of headers after installation attempts:

> /usr/include/boost/interprocess/detail/atomic.hpp > /usr/include/boost/interprocess/smart_ptr/detail/sp_counted_base_atomic.hpp > /usr/include/qt4/Qt/qatomic.h /usr/include/qt4/Qt/qbasicatomic.h > /usr/include/qt4/QtCore/qatomic.h > /usr/include/qt4/QtCore/qbasicatomic.h > /usr/share/doc/git-annex/html/bugs/git_annex_unlock_is_not_atomic.html > /usr/src/linux-headers-3.11.0-15/arch/alpha/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-15/arch/arc/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-15/arch/arm/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-15/arch/arm64/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-15/arch/avr32/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-15/arch/blackfin/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-15/arch/cris/include/arch-v10/arch/atomic.h > /usr/src/linux-headers-3.11.0-15/arch/cris/include/arch-v32/arch/atomic.h > /usr/src/linux-headers-3.11.0-15/arch/cris/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-15/arch/frv/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-15/arch/h8300/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-15/arch/hexagon/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-15/arch/ia64/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-15/arch/m32r/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-15/arch/m68k/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-15/arch/metag/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-15/arch/microblaze/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-15/arch/mips/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-15/arch/mn10300/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-15/arch/parisc/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-15/arch/powerpc/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-15/arch/s390/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-15/arch/score/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-15/arch/sh/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-15/arch/sparc/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-15/arch/tile/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-15/arch/x86/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-15/arch/xtensa/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-15/include/asm-generic/atomic.h > /usr/src/linux-headers-3.11.0-15/include/asm-generic/bitops/atomic.h > /usr/src/linux-headers-3.11.0-15/include/asm-generic/bitops/ext2-atomic.h > /usr/src/linux-headers-3.11.0-15/include/asm-generic/bitops/non-atomic.h > /usr/src/linux-headers-3.11.0-15/include/linux/atomic.h > /usr/src/linux-headers-3.11.0-15-generic/include/linux/atomic.h > /usr/src/linux-headers-3.11.0-17/arch/alpha/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-17/arch/arc/include/asm/atomic.h > 
/usr/src/linux-headers-3.11.0-17/arch/arm/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-17/arch/arm64/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-17/arch/avr32/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-17/arch/blackfin/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-17/arch/cris/include/arch-v10/arch/atomic.h > /usr/src/linux-headers-3.11.0-17/arch/cris/include/arch-v32/arch/atomic.h > /usr/src/linux-headers-3.11.0-17/arch/cris/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-17/arch/frv/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-17/arch/h8300/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-17/arch/hexagon/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-17/arch/ia64/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-17/arch/m32r/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-17/arch/m68k/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-17/arch/metag/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-17/arch/microblaze/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-17/arch/mips/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-17/arch/mn10300/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-17/arch/parisc/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-17/arch/powerpc/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-17/arch/s390/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-17/arch/score/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-17/arch/sh/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-17/arch/sparc/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-17/arch/tile/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-17/arch/x86/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-17/arch/xtensa/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-17/include/asm-generic/atomic.h > /usr/src/linux-headers-3.11.0-17/include/asm-generic/bitops/atomic.h > /usr/src/linux-headers-3.11.0-17/include/asm-generic/bitops/ext2-atomic.h > /usr/src/linux-headers-3.11.0-17/include/asm-generic/bitops/non-atomic.h > /usr/src/linux-headers-3.11.0-17/include/linux/atomic.h > /usr/src/linux-headers-3.11.0-17-generic/include/linux/atomic.h > /usr/src/linux-headers-3.11.0-18/arch/alpha/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-18/arch/arc/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-18/arch/arm/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-18/arch/arm64/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-18/arch/avr32/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-18/arch/blackfin/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-18/arch/cris/include/arch-v10/arch/atomic.h > /usr/src/linux-headers-3.11.0-18/arch/cris/include/arch-v32/arch/atomic.h > /usr/src/linux-headers-3.11.0-18/arch/cris/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-18/arch/frv/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-18/arch/h8300/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-18/arch/hexagon/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-18/arch/ia64/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-18/arch/m32r/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-18/arch/m68k/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-18/arch/metag/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-18/arch/microblaze/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-18/arch/mips/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-18/arch/mn10300/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-18/arch/parisc/include/asm/atomic.h > 
/usr/src/linux-headers-3.11.0-18/arch/powerpc/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-18/arch/s390/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-18/arch/score/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-18/arch/sh/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-18/arch/sparc/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-18/arch/tile/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-18/arch/x86/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-18/arch/xtensa/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-18/include/asm-generic/atomic.h > /usr/src/linux-headers-3.11.0-18/include/asm-generic/bitops/atomic.h > /usr/src/linux-headers-3.11.0-18/include/asm-generic/bitops/ext2-atomic.h > /usr/src/linux-headers-3.11.0-18/include/asm-generic/bitops/non-atomic.h > /usr/src/linux-headers-3.11.0-18/include/linux/atomic.h > /usr/src/linux-headers-3.11.0-18-generic/include/linux/atomic.h > /usr/src/linux-headers-3.11.0-19/arch/alpha/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-19/arch/arc/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-19/arch/arm/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-19/arch/arm64/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-19/arch/avr32/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-19/arch/blackfin/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-19/arch/cris/include/arch-v10/arch/atomic.h > /usr/src/linux-headers-3.11.0-19/arch/cris/include/arch-v32/arch/atomic.h > /usr/src/linux-headers-3.11.0-19/arch/cris/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-19/arch/frv/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-19/arch/h8300/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-19/arch/hexagon/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-19/arch/ia64/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-19/arch/m32r/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-19/arch/m68k/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-19/arch/metag/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-19/arch/microblaze/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-19/arch/mips/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-19/arch/mn10300/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-19/arch/parisc/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-19/arch/powerpc/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-19/arch/s390/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-19/arch/score/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-19/arch/sh/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-19/arch/sparc/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-19/arch/tile/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-19/arch/x86/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-19/arch/xtensa/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-19/include/asm-generic/atomic.h > /usr/src/linux-headers-3.11.0-19/include/asm-generic/bitops/atomic.h > /usr/src/linux-headers-3.11.0-19/include/asm-generic/bitops/ext2-atomic.h > /usr/src/linux-headers-3.11.0-19/include/asm-generic/bitops/non-atomic.h > /usr/src/linux-headers-3.11.0-19/include/linux/atomic.h > /usr/src/linux-headers-3.11.0-19-generic/include/linux/atomic.h > /usr/src/linux-headers-3.11.0-20/arch/alpha/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-20/arch/arc/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-20/arch/arm/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-20/arch/arm64/include/asm/atomic.h > 
/usr/src/linux-headers-3.11.0-20/arch/avr32/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-20/arch/blackfin/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-20/arch/cris/include/arch-v10/arch/atomic.h > /usr/src/linux-headers-3.11.0-20/arch/cris/include/arch-v32/arch/atomic.h > /usr/src/linux-headers-3.11.0-20/arch/cris/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-20/arch/frv/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-20/arch/h8300/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-20/arch/hexagon/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-20/arch/ia64/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-20/arch/m32r/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-20/arch/m68k/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-20/arch/metag/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-20/arch/microblaze/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-20/arch/mips/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-20/arch/mn10300/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-20/arch/parisc/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-20/arch/powerpc/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-20/arch/s390/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-20/arch/score/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-20/arch/sh/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-20/arch/sparc/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-20/arch/tile/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-20/arch/x86/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-20/arch/xtensa/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-20/include/asm-generic/atomic.h > /usr/src/linux-headers-3.11.0-20/include/asm-generic/bitops/atomic.h > /usr/src/linux-headers-3.11.0-20/include/asm-generic/bitops/ext2-atomic.h > /usr/src/linux-headers-3.11.0-20/include/asm-generic/bitops/non-atomic.h > /usr/src/linux-headers-3.11.0-20/include/linux/atomic.h > /usr/src/linux-headers-3.11.0-20-generic/include/linux/atomic.h > /usr/src/linux-headers-3.11.0-22/arch/alpha/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-22/arch/arc/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-22/arch/arm/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-22/arch/arm64/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-22/arch/avr32/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-22/arch/blackfin/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-22/arch/cris/include/arch-v10/arch/atomic.h > /usr/src/linux-headers-3.11.0-22/arch/cris/include/arch-v32/arch/atomic.h > /usr/src/linux-headers-3.11.0-22/arch/cris/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-22/arch/frv/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-22/arch/h8300/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-22/arch/hexagon/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-22/arch/ia64/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-22/arch/m32r/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-22/arch/m68k/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-22/arch/metag/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-22/arch/microblaze/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-22/arch/mips/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-22/arch/mn10300/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-22/arch/parisc/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-22/arch/powerpc/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-22/arch/s390/include/asm/atomic.h > 
/usr/src/linux-headers-3.11.0-22/arch/score/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-22/arch/sh/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-22/arch/sparc/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-22/arch/tile/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-22/arch/x86/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-22/arch/xtensa/include/asm/atomic.h > /usr/src/linux-headers-3.11.0-22/include/asm-generic/atomic.h > /usr/src/linux-headers-3.11.0-22/include/asm-generic/bitops/atomic.h > /usr/src/linux-headers-3.11.0-22/include/asm-generic/bitops/ext2-atomic.h > /usr/src/linux-headers-3.11.0-22/include/asm-generic/bitops/non-atomic.h > /usr/src/linux-headers-3.11.0-22/include/linux/atomic.h > /usr/src/linux-headers-3.11.0-22-generic/include/linux/atomic.h > /usr/src/linux-headers-3.14.4-031404/arch/alpha/include/asm/atomic.h > /usr/src/linux-headers-3.14.4-031404/arch/arc/include/asm/atomic.h > /usr/src/linux-headers-3.14.4-031404/arch/arm/include/asm/atomic.h > /usr/src/linux-headers-3.14.4-031404/arch/arm64/include/asm/atomic.h > /usr/src/linux-headers-3.14.4-031404/arch/avr32/include/asm/atomic.h > /usr/src/linux-headers-3.14.4-031404/arch/blackfin/include/asm/atomic.h > /usr/src/linux-headers-3.14.4-031404/arch/cris/include/arch-v10/arch/atomic.h > /usr/src/linux-headers-3.14.4-031404/arch/cris/include/arch-v32/arch/atomic.h > /usr/src/linux-headers-3.14.4-031404/arch/cris/include/asm/atomic.h > /usr/src/linux-headers-3.14.4-031404/arch/frv/include/asm/atomic.h > /usr/src/linux-headers-3.14.4-031404/arch/hexagon/include/asm/atomic.h > /usr/src/linux-headers-3.14.4-031404/arch/ia64/include/asm/atomic.h > /usr/src/linux-headers-3.14.4-031404/arch/m32r/include/asm/atomic.h > /usr/src/linux-headers-3.14.4-031404/arch/m68k/include/asm/atomic.h > /usr/src/linux-headers-3.14.4-031404/arch/metag/include/asm/atomic.h > /usr/src/linux-headers-3.14.4-031404/arch/microblaze/include/asm/atomic.h > /usr/src/linux-headers-3.14.4-031404/arch/mips/include/asm/atomic.h > /usr/src/linux-headers-3.14.4-031404/arch/mn10300/include/asm/atomic.h > /usr/src/linux-headers-3.14.4-031404/arch/parisc/include/asm/atomic.h > /usr/src/linux-headers-3.14.4-031404/arch/powerpc/include/asm/atomic.h > /usr/src/linux-headers-3.14.4-031404/arch/s390/include/asm/atomic.h > /usr/src/linux-headers-3.14.4-031404/arch/score/include/asm/atomic.h > /usr/src/linux-headers-3.14.4-031404/arch/sh/include/asm/atomic.h > /usr/src/linux-headers-3.14.4-031404/arch/sparc/include/asm/atomic.h > /usr/src/linux-headers-3.14.4-031404/arch/tile/include/asm/atomic.h > /usr/src/linux-headers-3.14.4-031404/arch/x86/include/asm/atomic.h > /usr/src/linux-headers-3.14.4-031404/arch/xtensa/include/asm/atomic.h > /usr/src/linux-headers-3.14.4-031404/include/asm-generic/atomic.h > /usr/src/linux-headers-3.14.4-031404/include/asm-generic/bitops/atomic.h > /usr/src/linux-headers-3.14.4-031404/include/asm-generic/bitops/ext2-atomic.h > /usr/src/linux-headers-3.14.4-031404/include/asm-generic/bitops/non-atomic.h > /usr/src/linux-headers-3.14.4-031404/include/linux/atomic.h > /usr/src/linux-headers-3.14.4-031404-generic/include/linux/atomic.h > /usr/src/linux-headers-3.14.4-031404-lowlatency/include/linux/atomic.h > /usr/src/linux-lts-saucy-3.11.0/arch/alpha/include/asm/atomic.h > /usr/src/linux-lts-saucy-3.11.0/arch/arc/include/asm/atomic.h > /usr/src/linux-lts-saucy-3.11.0/arch/arm/include/asm/atomic.h > /usr/src/linux-lts-saucy-3.11.0/arch/arm64/include/asm/atomic.h > 
/usr/src/linux-lts-saucy-3.11.0/arch/avr32/include/asm/atomic.h > /usr/src/linux-lts-saucy-3.11.0/arch/blackfin/include/asm/atomic.h > /usr/src/linux-lts-saucy-3.11.0/arch/cris/include/arch-v10/arch/atomic.h > /usr/src/linux-lts-saucy-3.11.0/arch/cris/include/arch-v32/arch/atomic.h > /usr/src/linux-lts-saucy-3.11.0/arch/cris/include/asm/atomic.h > /usr/src/linux-lts-saucy-3.11.0/arch/frv/include/asm/atomic.h > /usr/src/linux-lts-saucy-3.11.0/arch/h8300/include/asm/atomic.h > /usr/src/linux-lts-saucy-3.11.0/arch/hexagon/include/asm/atomic.h > /usr/src/linux-lts-saucy-3.11.0/arch/ia64/include/asm/atomic.h > /usr/src/linux-lts-saucy-3.11.0/arch/m32r/include/asm/atomic.h > /usr/src/linux-lts-saucy-3.11.0/arch/m68k/include/asm/atomic.h > /usr/src/linux-lts-saucy-3.11.0/arch/metag/include/asm/atomic.h > /usr/src/linux-lts-saucy-3.11.0/arch/microblaze/include/asm/atomic.h > /usr/src/linux-lts-saucy-3.11.0/arch/mips/include/asm/atomic.h > /usr/src/linux-lts-saucy-3.11.0/arch/mn10300/include/asm/atomic.h > /usr/src/linux-lts-saucy-3.11.0/arch/parisc/include/asm/atomic.h > /usr/src/linux-lts-saucy-3.11.0/arch/powerpc/include/asm/atomic.h > /usr/src/linux-lts-saucy-3.11.0/arch/s390/include/asm/atomic.h > /usr/src/linux-lts-saucy-3.11.0/arch/score/include/asm/atomic.h > /usr/src/linux-lts-saucy-3.11.0/arch/sh/include/asm/atomic.h > /usr/src/linux-lts-saucy-3.11.0/arch/sparc/include/asm/atomic.h > /usr/src/linux-lts-saucy-3.11.0/arch/tile/include/asm/atomic.h > /usr/src/linux-lts-saucy-3.11.0/arch/x86/include/asm/atomic.h > /usr/src/linux-lts-saucy-3.11.0/arch/xtensa/include/asm/atomic.h > /usr/src/linux-lts-saucy-3.11.0/include/asm-generic/atomic.h > /usr/src/linux-lts-saucy-3.11.0/include/asm-generic/bitops/atomic.h > /usr/src/linux-lts-saucy-3.11.0/include/asm-generic/bitops/ext2-atomic.h > /usr/src/linux-lts-saucy-3.11.0/include/asm-generic/bitops/non-atomic.h > /usr/src/linux-lts-saucy-3.11.0/include/linux/atomic.h > /usr/src/linux-lts-saucy-3.11.0/ubuntu/lttng/lib/ringbuffer/vatomic.h > /usr/src/linux-lts-saucy-3.11.0/ubuntu/lttng/wrapper/ringbuffer/vatomic.h > /usr/src/linux-lts-saucy-3.11.0/ubuntu/lttng-modules/lib/ringbuffer/vatomic.h > /usr/src/linux-lts-saucy-3.11.0/ubuntu/lttng-modules/wrapper/ringbuffer/vatomic.h

Yes, I know there are multiple headers of the same type here, but they are different versions. Version "linux-headers-3.14.4-031404" appears to be the latest. Ubuntu says nothing needs to be installed. However, the following C/C++ header files show up as missing in Eclipse and Qt4:

#include <linux/version.h>
#include <linux/module.h>
#include <linux/socket.h>
#include <linux/miscdevice.h>
#include <linux/list.h>
#include <linux/vmalloc.h>
#include <linux/slab.h>
#include <linux/init.h>
#include <asm/uaccess.h>
#include <asm/atomic.h>
#include <linux/delay.h>
#include <linux/usb.h>

This problem appears on my 32-bit version of Ubuntu and on both of my 64-bit versions. What am I doing wrong?
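One point worth noting for context (not part of the original question): every header in that #include list is a Linux kernel header, which build-essential does not provide. They come from the linux-headers packages and are normally resolved against the running kernel's build tree rather than /usr/include, so an IDE's include path has to point there explicitly. A minimal sketch of how to install and locate that tree:

# Install the header tree that matches the running kernel
# (build-essential alone does not pull this in)
sudo apt-get install linux-headers-$(uname -r)

# Out-of-tree kernel module builds -- and an IDE's include path -- use this
# symlinked build tree rather than /usr/include:
ls -l /lib/modules/$(uname -r)/build

# Typical include roots under that tree (arch path shown for x86):
ls /lib/modules/$(uname -r)/build/include/linux/module.h
ls /lib/modules/$(uname -r)/build/arch/x86/include/asm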

    Read the article

< Previous Page | 476 477 478 479 480 481 482 483 484 485 486 487  | Next Page >