Search Results

Search found 3148 results on 126 pages for 'amazon s3'.

Page 23/126 | < Previous Page | 19 20 21 22 23 24 25 26 27 28 29 30  | Next Page >

  • Troubleshooting Amazon EC2 reboot

    - by tgm
    We've had a server (CentOS) running in EC2 for a few months. It had been going pretty smoothly until today, when we got an alarm that the server was unavailable (the HTTP service couldn't be reached). So I tried SSHing into the box, but that timed out as well. I logged into the EC2 console and it said the instance was running, and there wasn't anything in the system log. One odd thing I noticed is that even though we have an Elastic IP attached to it (which shows in the Elastic IP management area), the instance detail is not showing that there is an EIP associated with the instance. I looked through the message log, and the last thing I see around the time we got our alert is dhclient renewing the lease. I'm guessing there may have been some sort of issue with the networking. How might I check whether that was the problem, or whether there were any other issues that may have caused our instance to stop responding?
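
    A couple of checks that might narrow this down (a sketch using the AWS CLI, assumed to be configured; the instance ID is a placeholder):

      aws ec2 describe-addresses --query 'Addresses[*].[PublicIp,InstanceId]' --output table
      aws ec2 describe-instance-status --instance-ids i-xxxxxxxx
      # once the box is reachable again, see what dhclient and the interface were doing around the alert:
      grep -E 'dhclient|eth0' /var/log/messages | tail -50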

    Read the article

  • "Recursive" Wildcard DNS for Amazon Route 53

    - by Brian
    The title of the question might be misleading because I'm not an expert and I'm trying to learn the proper terminology. That being said, I'm wondering if it's possible to set up a wildcard DNS record for any number of dot separations in domain names and have them all point to the network root. This is for WordPress multisite: users will have the option of choosing a mapped domain name, and I want to configure my DNS so that both mysite.co.uk.network.com and mysite.com.network.com are valid (I realize this is a bit ugly, but WordPress multisite requires that each site have a unique site_url, and I'd prefer to preserve the period-delimited appearance if it's possible).
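
    For what it's worth, a standard DNS wildcard matches names several labels deep as long as no more specific name exists in the zone, so a single record along these lines may be enough (a sketch in zone-file form; the IP is a placeholder):

      *.network.com.    300    IN    A    203.0.113.10
      ; matches mysite.com.network.com and mysite.co.uk.network.com alike, provided no
      ; explicit record exists for an intermediate name such as co.uk.network.com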

    Read the article

  • Cannot push to GitHub from Amazon EC2 Linux instance

    - by Eli
    Having the worst luck pushing files to a repo from EC2 to GitHub. I have my SSH key set up and added to GitHub. Here are the results of ssh -v git@github.com: OpenSSH_5.3p1, OpenSSL 1.0.0g-fips 18 Jan 2012 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug1: Connecting to github.com [207.97.227.239] port 22. debug1: Connection established. debug1: permanently_set_uid: 0/0 debug1: identity file /root/.ssh/identity type -1 debug1: identity file /root/.ssh/id_rsa type 1 debug1: identity file /root/.ssh/id_dsa type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1p1 Debian-5github2 debug1: match: OpenSSH_5.1p1 Debian-5github2 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.3 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Host 'github.com' is known and matches the RSA host key. debug1: Found key in /root/.ssh/known_hosts:1 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey debug1: Next authentication method: publickey debug1: Trying private key: /root/.ssh/identity debug1: Offering public key: /root/.ssh/id_rsa debug1: Remote: Forced command: gerve eliperelman 81:5f:8a:b2:42:6d:4e:8c:2d:ba:9a:8a:2b:9e:1a:90 debug1: Remote: Port forwarding disabled. debug1: Remote: X11 forwarding disabled. debug1: Remote: Agent forwarding disabled. debug1: Remote: Pty allocation disabled. debug1: Server accepts key: pkalg ssh-rsa blen 277 debug1: Trying private key: /root/.ssh/id_dsa debug1: No more authentication methods to try. Permission denied (publickey).
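
    One thing that often helps here is pinning the key in the SSH client config and testing the handshake on its own (a sketch; it assumes the id_rsa shown in the debug output is the key registered on GitHub):

      # /root/.ssh/config
      Host github.com
          User git
          IdentityFile /root/.ssh/id_rsa
          IdentitiesOnly yes

    Then ssh -T git@github.com should reply with a greeting naming the account (or repository, for a deploy key) the key is attached to, which tells you which identity GitHub thinks is pushing.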

    Read the article

  • How to configure keepalived on Amazon EC2?

    - by oeegee
    I read an article on keepalived over a GRE tunnel for failover in a VPS environment (http://blog.killtheradio.net/how-tos/keepalived-haproxy-and-failover-on-the-cloud-or-any-vps-without-multicast/), but I don't know how to configure it, or what to call this architecture. All I know is how to set up a master/backup configuration in keepalived itself. What I want to understand is how keepalived actually works - in particular with the unicast patch/module, since ELB is expensive. I want to design this:

      XMPP Server (EC2)
              |
      -------------------------------------------------
      keepalived Master (EC2)  -  keepalived Backup (EC2)
      HAProxy #1                  HAProxy #2
      -------------------------------------------------
              |
      Cassandra #1   Cassandra #2   Cassandra #3   Cassandra #4

    This was my first overall design - [Flow] ELB -- XMPP Server -- ELB -- Cassandra:

      ELB
       |
      XMPP #1   XMPP #2   XMPP #3   XMPP #4
       |
      ELB
       |
      Cassandra #1   Cassandra #2   Cassandra #3   Cassandra #4

    Then I changed that first design - [Flow] ELB -- XMPP Server -- HAProxy Master (Cassandra farm) -- Cassandra:

      ELB
       |
      XMPP #1   XMPP #2   XMPP #3   XMPP #4
       |
      -------------------------------------------------
      keepalived Master (EC2)  -  keepalived Backup (EC2)
      HAProxy #1                  HAProxy #2
      -------------------------------------------------
       |
      Cassandra #1   Cassandra #2   Cassandra #3   Cassandra #4

    And this is the second alternative - [Flow] ELB -- HAProxy (XMPP farm) -- XMPP Server -- HAProxy (Cassandra farm) -- Cassandra. Is it OK?

      ELB
       |
      HAProxy #1   HAProxy #2   HAProxy #3   HAProxy #4
      XMPP #1      XMPP #2      XMPP #3      XMPP #4
       |
      Cassandra #1   Cassandra #2   Cassandra #3   Cassandra #4

    Thanks!
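
    Since EC2 does not carry the multicast VRRP traffic keepalived uses by default, the unicast option is the usual route. A minimal sketch of the master side, assuming a keepalived build that supports unicast_peer; the addresses and notify script name are placeholders:

      vrrp_instance HAPROXY_HA {
          state MASTER
          interface eth0
          virtual_router_id 51
          priority 101
          unicast_src_ip 10.0.1.10        # this HAProxy node
          unicast_peer {
              10.0.1.11                   # the backup HAProxy node
          }
          authentication {
              auth_type PASS
              auth_pass secret
          }
          notify_master /usr/local/bin/promote.sh   # e.g. re-associate an Elastic IP with this node
      }

    On EC2 there is no gratuitous-ARP style floating IP, so the notify script is typically what moves an Elastic IP (or a route table entry) to whichever node is currently master.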

    Read the article

  • Amazon AWS EC2 instance, Elastic IP, domain name from an external domain seller, and Google Apps for email

    - by Sid
    We are hosting our site on an EC2 instance. Our Elastic IP is w.x.y.z and the public DNS is ec2-w-x-y-z.compute-1.amazonaws.com. We've bought a domain name, domainname.com, from a lesser-known domain-name seller and added an A record pointing domainname.com to w.x.y.z. Will this work, or do we need a CNAME record pointing there too? We wanted to use Google Apps for email, so we adjusted the TXT/MX records according to the Google Apps instructions to be able to send/receive email using @domainname.com addresses. Have we got it right? More importantly, we came across reports of email sent from ec2-w-x-y-z.compute-1.amazonaws.com (our users can send email from their on-site accounts) going to spam (rDNS pointing not to domainname.com but to ec2-w-x-y-z.compute-1.amazonaws.com). How can we fix this? We came across SPF records; do they provide a complete solution? We aren't sure how to use them. Can you help, please? Thank you, Sid
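
    On the spam side, an SPF TXT record that authorizes both Google and the EC2 host usually helps (a sketch in zone-file form; swap in the real IP). Reverse DNS for an Elastic IP is a separate matter - it can only be changed by filing a request with AWS:

      domainname.com.   IN  A    w.x.y.z
      domainname.com.   IN  MX   1  ASPMX.L.GOOGLE.COM.
      domainname.com.   IN  TXT  "v=spf1 a include:_spf.google.com ip4:w.x.y.z ~all"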

    Read the article

  • Amazon EC2 SSH failed to connect: "Bad file number"

    - by Mark McCook
    This is the command I am told to use by clicking Connect in the control panel: "ssh -i private_key.pem root@instancePublicDNS". Well, that one failed, so I wanted to know what happened and ran "ssh -vvv private_key.pem root@instancePublicDNS": OpenSSH_4.6p1, OpenSSL 0.9.8e 23 Feb 2007 debug2: ssh_connect: needpriv 0 debug1: Connecting to private_key.pem [...] port 22. debug1: connect to address ... port 22: Attempt to connect timed out without establishing a connection ssh: connect to host private_key.pem port 22: Bad file number Any ideas? I have searched for the answer on Google and Server Fault and found a few possible solutions that did not work. Info about the instance: AMI ID ami-688c7801 (Ubuntu 10.10 Server)
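
    For reference, the second command is missing the -i flag, so ssh treats private_key.pem as the hostname to connect to - which is where the timeout and "Bad file number" come from. The usual form, assuming a stock Ubuntu AMI where the login user is ubuntu rather than root:

      chmod 400 private_key.pem
      ssh -v -i private_key.pem ubuntu@ec2-xx-xx-xx-xx.compute-1.amazonaws.com
      # if this still times out, check that the instance's security group allows inbound TCP 22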

    Read the article

  • Amazon EC2 Reserved Instances: "Heavy Utilization" clarification

    - by gravyface
    Should be another easy one here, but I need clarification on what they define as "heavy utilization" for Reserved Instance types. From their Website: Heavy Utilization RIs – Heavy Utilization RIs offer the most absolute savings of any Reserved Instance type. They’re most appropriate for steady-state workloads where you’re willing to commit to always running these instances in exchange for our lowest hourly usage fee. With this RI, you pay a little higher upfront payment than Medium Utilization RIs, a significantly lower hourly usage fee, and you’re charged that lower hourly rate for every hour in the Reserved Instance term you purchase. Using Heavy Utilization RIs, you can save up to 41% for a 1-year term and 58% for a 3-year term vs. running On-Demand Instances. If you’re trying to find a break-even utilization, you’re economically advantaged using Heavy Utilization RIs (vs. On-Demand Instances) if you plan to use your instance more than 43% of a 1-year term or 79% of a 3-year term. I'm assuming that, if I'm planning on running a 24/7 Web Server, then regardless of how many resources I consume (bandwidth, cpu cycles, memory), I would want to go with a Heavy Utilization Reserved Instance? This one Web Server in particular will likely barely budge the cpu, but it needs to be up and running 24/7. Not 100% on what they're defining as "heavy".

    Read the article

  • Amazon EC2 High-Memory Extra Large Instance

    - by Simpanoz
    I am new to MongoDB and EC2. Suppose I use the following single MongoDB server: High-Memory Extra Large Instance, 17.1 GiB memory, 6.5 ECU (2 virtual cores with 3.25 EC2 Compute Units each), 420 GB of local instance storage, 64-bit platform. As a layman's question: quantifying I/O as data in MB/sec, how much I/O can a MongoDB server on this instance handle comfortably without being burnt out? Assume the default settings of the EC2 server with Ubuntu and the MongoDB version available in the AWS Marketplace. Thanks in advance
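
    Rather than a number in the abstract, it is usually easier to measure the instance itself. A rough sketch, assuming the sysstat package and the standard MongoDB tools are installed:

      # raw throughput of the local instance-store volume (direct I/O, 1 GB):
      dd if=/dev/zero of=/mnt/ddtest bs=1M count=1024 oflag=direct
      # then watch disk and mongod behaviour while replaying a realistic workload:
      iostat -xm 5
      mongostat 5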

    Read the article

  • Moving MySQL directory on an Amazon EC2 machine

    - by Traveling Tech Guy
    I'm trying to have MySQL point to a directory on an EBS volume mounted on my EC2 machine. I took the following steps: Stopped MySQL (/etc/init.d/mysqld stop) - successful. Created a MySQL directory on my volume, mounted on /vol (mkdir /vol/mysql). Copied the contents of /var/lib/mysql to /vol/mysql (cp -R /var/lib/mysql /vol/mysql). Changed the owner and group of that directory to match the original (chown -R mysql:mysql /vol/mysql) - after this step, the two directories are identical. Edited the /etc/my.cnf file (commented out the two original lines): [mysqld] #datadir=/var/lib/mysql #socket=/var/lib/mysql/mysql.sock datadir=/vol/mysql socket=/vol/mysql/mysql.sock Started MySQL (/etc/init.d/mysqld start) - FAILED. The error file /var/log/mysqld.log contains the following lines: 100205 20:52:54 mysqld started 100205 20:52:54 InnoDB: Started; log sequence number 0 43665 100205 20:52:54 [Note] /usr/libexec/mysqld: ready for connections. Version: '5.0.45' socket: '/vol/mysql/mysql.sock' port: 3306 Source distribution No other errors are available. What am I doing wrong? Where can I find the errors encountered by MySQL? If I restore the original lines, MySQL starts, leading me to believe it may be a permissions issue - but permissions are the same for both directories? Thanks!
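
    Two details worth double-checking (a sketch, not a diagnosis): cp -R /var/lib/mysql /vol/mysql into a directory that already exists puts the files in /vol/mysql/mysql, and the mysql command-line client takes its socket path from a [client] section. If the data really did end up nested, the config would need to be more like:

      [mysqld]
      datadir=/vol/mysql/mysql
      socket=/vol/mysql/mysql/mysql.sock

      [client]
      socket=/vol/mysql/mysql/mysql.sock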

    Read the article

  • Configuring DNS on my Amazon AWS [closed]

    - by Ricardo
    So, I have an AWS EC2 instance and I need to configure a DNS server. I have Ubuntu 11.04 with Webmin configured, and I have a domain pointed at my IP. So now I need to redirect my domain to my IP and configure the BIND DNS server? What configuration do I have to do to point my domain at my IP and create my own DNS server? I've seen some videos on YouTube, but I don't know which approach is best for me. Thanks for any help.
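
    If the goal really is to run BIND on the instance, a minimal zone file looks roughly like this (a sketch: the domain, hostnames and Elastic IP are placeholders; it is often simpler to just add A records at the registrar and skip running your own DNS entirely):

      ; /etc/bind/db.example.com
      $TTL 3600
      @       IN  SOA  ns1.example.com. admin.example.com. (
                       2014010101 ; serial
                       3600       ; refresh
                       900        ; retry
                       604800     ; expire
                       300 )      ; negative TTL
              IN  NS   ns1.example.com.
              IN  A    203.0.113.10
      ns1     IN  A    203.0.113.10
      www     IN  A    203.0.113.10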

    Read the article

  • DNS with Amazon EC2 Elastic Load Balancer

    - by user68173
    I'm told that I can't make the root of a domain (example.com) a CNAME - I have to specify an IP. Given that you can't use an IP address to point at your Elastic Load Balancer, what's the best thing to do? Currently I do this: example.com - A record to the Elastic IP of the first server, which redirects to www.example.com; www.example.com - CNAME to the hostname of the load balancer. If the first server is out of action, the redirect will fail. Is there a better way to approach this?
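
    If the zone is hosted in Route 53, an alias record addresses exactly this: the zone apex still answers as an A record but tracks the ELB. A sketch of the change batch (domain, zone ID and ELB hostname are placeholders), applied with aws route53 change-resource-record-sets --hosted-zone-id <your-zone-id> --change-batch file://alias.json; note the HostedZoneId inside AliasTarget is the ELB's canonical hosted zone ID, not your domain's:

      {
        "Changes": [{
          "Action": "UPSERT",
          "ResourceRecordSet": {
            "Name": "example.com.",
            "Type": "A",
            "AliasTarget": {
              "HostedZoneId": "ZZZZZZZZZZZZZZ",
              "DNSName": "my-load-balancer-1234567890.us-east-1.elb.amazonaws.com.",
              "EvaluateTargetHealth": false
            }
          }
        }]
      }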

    Read the article

  • Amazon EC2 performance vs desktop

    - by flashnik
    I'm wondering how to compare the performance of EC2 instances with standard dedicated servers and desktops. I've found only comparisons of different clouds. I need a solution for some computations which require CPU and memory (disk I/O is not used). The choice is to use: EC2 (High-CPU), or a Xeon 5620/5630 with DDR3, or a Core i7-960/980 with DDR3. Can anybody help with how to compare their performance? I'm not speaking about the reliability of the alternatives; I want to understand the pros and cons purely from the point of view of performance.
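
    One way to get an apples-to-apples CPU number across EC2, the Xeons and the i7 is to run the same synthetic benchmark on each box (a sketch using sysbench's older syntax; match the thread count to the core count of each machine):

      sysbench --test=cpu --cpu-max-prime=20000 --num-threads=8 run
      sysbench --test=memory --memory-total-size=4G --num-threads=8 run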

    Read the article

  • Zabbix on Amazon EC2: installation

    - by 330xi
    I've read about mikoomi, and it is not suitable for me; I do not have access to the secret key. I want to install and run the Zabbix server and agent directly on the instances. Is that a good idea? And here is the problem: during yum install zabbix-server-pgsql zabbix-web-pgsql zabbix-agent I got: Error: httpd24-tools conflicts with httpd-tools Error: httpd24 conflicts with httpd How can I complete the installation successfully?
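
    The error means the Apache 2.4 packages (httpd24*) and the plain httpd packages cannot coexist. One hedged way through it, assuming nothing else on the instance still needs the httpd24 packages:

      sudo yum remove httpd24 httpd24-tools
      sudo yum install httpd httpd-tools
      sudo yum install zabbix-server-pgsql zabbix-web-pgsql zabbix-agent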

    Read the article

  • How does the Amazon ELB health check work?

    - by diegodias
    I am having problems configuring ELB for my servers. I start 2 micro instances with the exact same configuration and try to do load balancing. However, they never pass the health check (HTTP, port 80, path "/"). Ping is OK on the website. So is telnet on 80. How does the health check work? Am I doing anything really wrong? EDIT: Both direct browser access and GET (via curl) work correctly (status 200).
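
    The check itself is just an HTTP GET against the configured path at the configured interval, expecting an HTTP 200 within the timeout, and it originates from the ELB inside AWS rather than from the public internet - so the instance's security group has to admit the ELB's traffic. One way to approximate it from another instance (a sketch; the IP is a placeholder):

      curl -v --max-time 5 http://<instance-private-ip>/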

    Read the article

  • MySQL Server hitting 100% unexpectedly (Amazon AWS RDS)

    - by Luc
    Please help! We've been struggling with this one for months. This week we upped our RDS instance to the highest-performing instance, and although the occurrences have reduced, our DB still hits 100% all of a sudden. It comes out of nowhere - sometimes 2am, sometimes midday. I've ruled out a DoS - our page access logs show normal traffic. I've ruled out memcached suddenly dying (hits and misses continue as normal). SHOW PROCESSLIST while we are having issues reports about 500 queries in the queue. If I kill them off or restart the server, they just keep coming back, and then eventually, out of nowhere, the server returns to normal - sometimes after up to 3 hours. Our worst-performing queries take .02 seconds to execute when the server eventually returns to normal, but while we're in this 100% CPU phase, those queries never finish executing. Please help! Anybody know anything about MySQL query optimization? Could it be the server deciding to use different indexes all of a sudden, which puts it into a spiral?
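
    One thing that usually pays off is capturing exactly what the server is chewing on during the next spike, then EXPLAIN-ing the worst offenders once it calms down (a sketch; the endpoint and credentials are placeholders):

      mysql -h mydb.xxxxxxxx.us-east-1.rds.amazonaws.com -u admin -p \
            -e "SHOW FULL PROCESSLIST\G" > processlist.$(date +%s).txt
      mysql -h mydb.xxxxxxxx.us-east-1.rds.amazonaws.com -u admin -p \
            -e "SHOW ENGINE INNODB STATUS\G" > innodb.$(date +%s).txt
      # afterwards, EXPLAIN the queries that were piling up, and consider enabling the slow
      # query log via the RDS parameter group (slow_query_log=1, long_query_time=1)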

    Read the article

  • Connecting jconsole using SOCKS to Amazon EC2

    - by freshfunk
    I'm trying to use jconsole to view stats on an EC2 instance by using a socks proxy created by SSH. I've tried the various scripts mentioned in the links below but to no avail: http://simplygenius.com/2010/08/jconsole-via-socks-ssh-tunnel.html http://gabrielcain.com/blog/2010/11/02/using-ssh-proxying-to-connect-jconsole-to-remote-cassandra-instances/ I'm running ssh -f -ND 8123 myuser@mymachine and verified that at least Firefox goes through it as a proxy. I then run jconsole -J-DsocksProxyHost=localhost -J-DsocksProxyPort=8123 service:jmx:rmi:///jndi/rmi://ec2-XX-XX-XXX-XXX.compute-1.amazonaws.com:8080/jmxrmi I run netstat -n on my EC2 instance and I see a connection created by my machine. However, the connection eventually disappears and I get a 'channel 2: open failed: connect failed: Operation timed out' from my ssh tunnel. I've opened the jmx port through the security group and I've checked the port on the EC2 instance to make sure it's open (by telnet-ing to it). I'm not sure where to look next. Are there some properties in sshd_config or ssh_config I need to enable for tunneling? Or anything in Mac OS X? I feel like a serious noob but sys administration is really not my strong point. I've spent several hours and can't get this to work.
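
    The part that usually bites here is that JMX negotiates a second, random RMI port after the initial connection, which the SOCKS tunnel never sees. A sketch of JVM flags for the EC2 side that pin both ports to 8080 and advertise the public hostname (assumes a JVM new enough to honour com.sun.management.jmxremote.rmi.port; the jar name is a placeholder):

      java -Dcom.sun.management.jmxremote \
           -Dcom.sun.management.jmxremote.port=8080 \
           -Dcom.sun.management.jmxremote.rmi.port=8080 \
           -Dcom.sun.management.jmxremote.authenticate=false \
           -Dcom.sun.management.jmxremote.ssl=false \
           -Djava.rmi.server.hostname=ec2-XX-XX-XXX-XXX.compute-1.amazonaws.com \
           -jar yourapp.jar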

    Read the article

  • How to solve an out-of-memory error in Java on an Amazon EC2 server

    - by sathishkumar
    Can anyone explain this error message? We are using the IBM JRE to run a Java application, and it's occupying more and more space on the server. JVMDUMP006I Processing dump event "systhrow", detail "java/lang/OutOfMemoryError" - please wait. JVMDUMP006I Processing dump event "systhrow", detail "java/lang/OutOfMemoryError" - please wait. JVMDUMP032I JVM requested Heap dump using '/home/sathish/jetty6/heapdump.20110417.114115.18926.0001.phd' in response to an event JVMDUMP010I Heap dump written to /home/sathish/jetty6/heapdump.20110417.114115.18926.0001.phd JVMDUMP032I JVM requested Heap dump using '/home/sathish/jetty6/heapdump.20110417.114115.18926.0002.phd' in response to an event JVMDUMP010I Heap dump written to /home/sathish/jetty6/heapdump.20110417.114115.18926.0002.phd JVMDUMP032I JVM requested Heap dump using '/home/sathish/jetty6/heapdump.20110417.114115.18926.0003.phd' in response to an event JVMDUMP010I Heap dump written to /home/sathish/jetty6/heapdump.20110417.114115.18926.0003.phd JVMDUMP032I JVM requested Java dump using '/home/sathish/jetty6/javacore.20110417.114115.18926.0004.txt' in response to an event JVMDUMP010I Java dump written to /home/sathish/jetty6/javacore.20110417.114115.18926.0004.txt
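
    The JVMDUMP lines just record that the IBM JVM hit an OutOfMemoryError and wrote heap dumps (.phd) and a javacore into the Jetty directory. A common first step is to raise the maximum heap and watch GC while the .phd files are examined in a heap-dump analyser; a sketch, assuming Jetty 6 is launched via start.jar:

      java -Xmx1024m -Xms512m -verbose:gc -jar start.jar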

    Read the article

  • Can't authorize a server for Amazon RDS

    - by Parris
    We are attempting to slowly migrate a website over to AWS, among other things. We decided the first thing to move was the database. We have a dedicated server with a different hosting provider, and we only have one IP. I am having trouble authorizing that IP so the old server can connect to RDS. It simply hangs for a while when using the mysql CLI, then responds: ERROR 2003 (HY000): Can't connect to MySQL server on 'db.address.us-east-1.rds.amazonaws.com' (110) It did work from my laptop, though. I am not quite sure what is wrong. I have a feeling I don't quite understand CIDR/IP. I simply took the IP address and tacked /32 onto the end. Then I gleaned some information that it also has to do with the subnet mask? ifconfig reports 255.255.255.0. I found a calculator, and the IP changed a bit and had /24 at the end; that still didn't work. One other note: perhaps I don't know enough about the differences between OSes. The hosting provider is using CentOS, while our development machines are all Ubuntu. Any insight would be extremely helpful! THANKS :)
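
    On the CIDR point: /32 simply means "exactly this one address", and the 255.255.255.0 mask from ifconfig describes the local LAN, not what RDS sees. What matters is the public IP the dedicated server egresses from; a sketch for finding it and re-testing (the user name is a placeholder):

      curl -s http://checkip.amazonaws.com      # authorize this address as <that-ip>/32
      mysql -h db.address.us-east-1.rds.amazonaws.com -u myuser -p -e "SELECT 1"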

    Read the article

  • Installing SSL certs with nginx on Amazon EC2

    - by Ethan
    I finally got a cert from an authority and am struggling to get things working. I've created the appropriate combined certificate (personal + intermediate + root) and nginx is pointing to it. I got an Elastic IP and connected it to my EC2 instance, and my DNS records point to that IP. But when I point the browser at the hostname, I get the standard "Connection Untrusted" bit, with ssl_error_bad_cert_domain. Port 443 is open - I can get to the site over https if I ignore the warning. The weird thing is, under technical details, it lists the domain I tried to access as valid! When I try to diagnose with SSL testing sites, they don't even detect a certificate! What am I missing here? The domain is yanlj.coinculture.info. Note I've got coinculture.info running on a home server without a dedicated IP and it has the same problem, but I'll be moving that to the same EC2 instance as soon as I figure this thing out. I thought the Elastic IP would solve things, but it hasn't.
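
    Two quick checks usually separate "wrong certificate" from "nginx serving its default certificate" (a sketch; the file paths are placeholders):

      # what the server actually presents for that hostname (SNI):
      openssl s_client -connect yanlj.coinculture.info:443 -servername yanlj.coinculture.info \
          </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer

      # and the relevant nginx server block should look roughly like:
      server {
          listen 443 ssl;
          server_name yanlj.coinculture.info;
          ssl_certificate     /etc/nginx/ssl/yanlj.coinculture.info.chained.crt;
          ssl_certificate_key /etc/nginx/ssl/yanlj.coinculture.info.key;
      }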

    Read the article

  • Allowing a private subnet EC2 access to the internet - Amazon AWS

    - by Xavier Hutchinson
    I have a VPC "VPC with Public and Private Subnets" created via the VPC wizard which should include NAT for the private subnet VPCs however it's not working. They are unable to browse the internet, resolve internet names and ping internet IPs. This is a stock standard conf, I am very sure of that so I am unsure why it's not working. Perhaps there was something additional I am supposed to do that I don't know about? Thank you, Xavier.
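
    Two things worth verifying when the wizard's NAT setup doesn't pass traffic (an AWS CLI sketch; all IDs are placeholders): the private subnet's route table needs a 0.0.0.0/0 route pointing at the NAT instance, and the NAT instance must have source/destination checking disabled:

      aws ec2 describe-route-tables --filters Name=vpc-id,Values=vpc-xxxxxxxx
      aws ec2 modify-instance-attribute --instance-id i-nat00000 --no-source-dest-check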

    Read the article

  • Amazon EC2 - Unable to connect to MySQL

    - by alexus
    I'm having an issue connecting from one VM to another: # nmap -p3306 ip-XX-XX-XX-XX.ec2.internal Starting Nmap 6.40 ( http://nmap.org ) at 2014-06-10 17:50 EDT Nmap scan report for ip-XX-XX-XX-XX.ec2.internal (XX.XX.XX.XX) Host is up (0.000033s latency). PORT STATE SERVICE 3306/tcp closed mysql Nmap done: 1 IP address (1 host up) scanned in 1.05 seconds # In my security group I allowed inbound connectivity via TCP, port range 3306, source 0.0.0.0/0, so theoretically it should work, but in reality it doesn't. I'm running Red Hat Enterprise Linux 7 on both VMs. mariadb.service is running fine on the other VM and I am able to connect to it locally. On the DB host: # netstat -anp | grep 3306 tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN 2324/mysqld # iptables -L Chain INPUT (policy ACCEPT) target prot opt source destination Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination # Any ideas what else I missed?
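
    A few checks that often sort this out on RHEL 7 (a sketch; the user name is a placeholder): make sure firewalld isn't filtering on the database VM, and confirm the listener is reachable on the private address rather than only on localhost:

      # on the database VM:
      sudo systemctl status firewalld
      sudo firewall-cmd --permanent --add-port=3306/tcp && sudo firewall-cmd --reload
      mysql -h <private-ip-of-this-vm> -u someuser -p -e "SELECT 1"   # not just -h localhost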

    Read the article

  • Git and Amazon EC2 public key denied

    - by MrNart
    I had Git working before on /var/html/projectfolder, then realized it was a security risk, so I made a new folder /projects off the root folder and tried to replicate what I had done - and now it doesn't work. Here is the backlog of what I did on my local machine and the EC2 server. Server (EC2): 1. I added my public key to the authorized_user file in the ~/.ssh folder. 2. Created a bare repository with git init --bare. 3. Changed folder permissions with sudo chgrp -R ec2-user * and sudo chmod -R g+ws *. Local machine: created a local repository with git init; touched, added, and committed a readme file; pointed origin at EC2 via git remote add origin ssh://ec2-user@remote-ip/path/to/folder. This is my output: Permission denied (publickey) fatal: The remote end hung up unexpectedly
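
    Since this broke after moving folders around, it's worth re-checking the permissions sshd insists on before it will read keys at all (note the file it reads is ~/.ssh/authorized_keys), and then watching which key the client offers (a sketch):

      # on the EC2 server, as ec2-user:
      chmod 700 ~/.ssh
      chmod 600 ~/.ssh/authorized_keys
      chown -R ec2-user:ec2-user ~/.ssh

      # from the local machine:
      ssh -vT ec2-user@remote-ip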

    Read the article

  • Storing bundled AMIs on Amazon EC2

    - by Industrial
    Hi everybody, I am totally new to configuring servers and working with EC2, so please bear with me. After a lot of hair-pulling, I managed to get a server with Ubuntu up and running, with memcached and some other goodies that make a great package for me. I thought, however, that by storing it as an AMI with this tool I would have memcached available the next time I launched an instance based on that image. What can I do to make sure that my configuration is saved properly to an instance? Question number two: can I somehow have a command run automatically on server creation, like starting memcached with "memcached -d -m 1700 -u root", or even a batch of them?
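
    For the second question: on Ubuntu EC2 images, instance user data that starts with a shebang is executed by cloud-init on first boot, so a start command can ride along at launch time (a sketch; an /etc/rc.local entry baked into the AMI is the alternative if it should run on every reboot):

      #!/bin/bash
      # EC2 user-data script - runs once, at the first boot of the new instance
      memcached -d -m 1700 -u root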

    Read the article

  • OpenVPN client on Amazon EC2

    - by Matt Culbreth
    I have an account with an OpenVPN service, and I'd like to get that running on my EC2 instance running Ubuntu 12.04. I have my config file in /etc/openvpn, and it connects fine when I run sudo openvpn --config matt.ovpn. However, I then lose connectivity to the EC2 machine, and I can't SSH back to it until I reboot. Previously I have done things like sudo ip rule add from IP_ADDRESS table 10 and then sudo ip route add default via GATEWAY_IP table 10, but that's not working on EC2. Any ideas? My private IP address right now is 10.209.29.XXX and my gateway is 10.209.29.1.
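
    The usual pattern for keeping SSH alive while the tunnel takes over the default route is to give traffic sourced from the EC2 private address its own routing table before the VPN comes up - essentially the ip rule/ip route pair already being tried, plus naming the table (a sketch; the XXX stays a placeholder):

      # one-time: give table 10 a readable name
      echo "10 novpn" | sudo tee -a /etc/iproute2/rt_tables

      # replies to SSH (and anything else sourced from eth0's address) keep using the EC2 gateway:
      sudo ip rule add from 10.209.29.XXX table novpn
      sudo ip route add default via 10.209.29.1 dev eth0 table novpn

      sudo openvpn --config /etc/openvpn/matt.ovpn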

    Read the article
