Search Results

Search found 3627 results on 146 pages for 'amazon ses'.


  • ec2 ami device mapping

    - by hortitude
    I have a large EC2 Ubuntu image and I'm just looking through the devices. I noticed from the metadata that:

        % curl http://169.254.169.254/latest/meta-data/block-device-mapping/ami
        sda1
        % curl http://169.254.169.254/latest/meta-data/block-device-mapping/ephemeral0
        sdb

    However, when I look at what is actually mounted, there is /dev/xvda1 and /dev/xvdb (and there is no /dev/sd*). I know that both names look somewhat valid from the AWS documentation, but it looks to me like there is a mismatch between the instance metadata and what is actually on the machine. Why don't they match?
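
    A minimal sketch of working around the mismatch (assuming a Xen-based Ubuntu guest, where the kernel exposes the sd* devices the metadata reports as xvd* instead):

        #!/bin/bash
        # Ask the metadata service which device the root AMI was mapped to
        meta=$(curl -s http://169.254.169.254/latest/meta-data/block-device-mapping/ami)
        # Try the literal name first, then the xvd-renamed equivalent
        for dev in "/dev/$meta" "/dev/${meta/sd/xvd}"; do
            [ -b "$dev" ] && echo "root device is $dev" && break
        done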

    Read the article

  • OpenVPN (HideMyAss) client on Ubuntu: Route only HTTP traffic

    - by Andersmith
    I want to use HideMyAss VPN (hidemyass.com) on Ubuntu Linux to route only HTTP (ports 80 & 443) traffic to the HideMyAss VPN server, and leave all the other traffic (MySQL, SSH, etc.) alone. I'm running Ubuntu on AWS EC2 instances. The problem is that when I try to run the default HMA script, I suddenly can't SSH into the Ubuntu instance anymore and have to reboot it from the AWS console. I suspect the Ubuntu instance will also have trouble connecting to the RDS MySQL database, but I haven't confirmed that. HMA uses OpenVPN like this:

        sudo openvpn client.cfg

    The client configuration file (client.cfg) looks like this:

        ##############################################
        # Sample client-side OpenVPN 2.0 config file #
        # for connecting to multi-client server.     #
        #                                            #
        # This configuration can be used by multiple #
        # clients, however each client should have   #
        # its own cert and key files.                #
        #                                            #
        # On Windows, you might want to rename this  #
        # file so it has a .ovpn extension           #
        ##############################################

        # Specify that we are a client and that we
        # will be pulling certain config file directives
        # from the server.
        client
        auth-user-pass
        #management-query-passwords
        #management-hold
        # Disable management port for debugging port issues
        #management 127.0.0.1 13010
        ping 5
        ping-exit 30

        # Use the same setting as you are using on
        # the server.
        # On most systems, the VPN will not function
        # unless you partially or fully disable
        # the firewall for the TUN/TAP interface.
        #;dev tap
        dev tun

        # Windows needs the TAP-Win32 adapter name
        # from the Network Connections panel
        # if you have more than one. On XP SP2,
        # you may need to disable the firewall
        # for the TAP adapter.
        ;dev-node MyTap

        # Are we connecting to a TCP or
        # UDP server? Use the same setting as
        # on the server.
        proto tcp
        ;proto udp

        # The hostname/IP and port of the server.
        # You can have multiple remote entries
        # to load balance between the servers.
        # All VPN Servers are added at the very end
        ;remote my-server-2 1194

        # Choose a random host from the remote
        # list for load-balancing. Otherwise
        # try hosts in the order specified.
        # We order the hosts according to number of connections.
        # So no need to randomize the list
        # remote-random

        # Keep trying indefinitely to resolve the
        # host name of the OpenVPN server. Very useful
        # on machines which are not permanently connected
        # to the internet such as laptops.
        resolv-retry infinite

        # Most clients don't need to bind to
        # a specific local port number.
        nobind

        # Downgrade privileges after initialization (non-Windows only)
        ;user nobody
        ;group nobody

        # Try to preserve some state across restarts.
        persist-key
        persist-tun

        # If you are connecting through an
        # HTTP proxy to reach the actual OpenVPN
        # server, put the proxy server/IP and
        # port number here. See the man page
        # if your proxy server requires
        # authentication.
        ;http-proxy-retry # retry on connection failures
        ;http-proxy [proxy server] [proxy port #]

        # Wireless networks often produce a lot
        # of duplicate packets. Set this flag
        # to silence duplicate packet warnings.
        ;mute-replay-warnings

        # SSL/TLS parms.
        # See the server config file for more
        # description. It's best to use
        # a separate .crt/.key file pair
        # for each client. A single ca
        # file can be used for all clients.
        ca ./keys/ca.crt
        cert ./keys/hmauser.crt
        key ./keys/hmauser.key

        # Verify server certificate by checking
        # that the certificate has the nsCertType
        # field set to "server". This is an
        # important precaution to protect against
        # a potential attack discussed here:
        # http://openvpn.net/howto.html#mitm
        #
        # To use this feature, you will need to generate
        # your server certificates with the nsCertType
        # field set to "server". The build-key-server
        # script in the easy-rsa folder will do this.
        ;ns-cert-type server

        # If a tls-auth key is used on the server
        # then every client must also have the key.
        ;tls-auth ta.key 1

        # Select a cryptographic cipher.
        # If the cipher option is used on the server
        # then you must also specify it here.
        ;cipher x

        # Enable compression on the VPN link.
        # Don't enable this unless it is also
        # enabled in the server config file.
        #comp-lzo

        # Set log file verbosity.
        verb 3

        # Silence repeating messages
        ;mute 20

        # Detect proxy automatically
        #auto-proxy

        # Need this for Vista connection issue
        route-metric 1

        # Get rid of the cached password warning
        #auth-nocache
        #show-net-up
        #dhcp-renew
        #dhcp-release
        #route-delay 0 120

        # added to prevent MITM attack
        ns-cert-type server

        #
        # Remote servers added dynamically by the master server
        # DO NOT CHANGE below this line
        #
        remote-random
        remote 173.242.116.200 443 # 0
        remote 38.121.77.74 443 # 0
        # etc...
        remote 67.23.177.5 443 # 0
        remote 46.19.136.130 443 # 0
        remote 173.254.207.2 443 # 0
        # END
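
    One way to approach this (a sketch, not HMA-specific; the tun0 name, mark value, and table number are assumptions) is to keep the server from replacing the default route, then policy-route only web traffic through the tunnel:

        # in client.cfg: ignore the routes the server pushes
        route-nopull

        # mark outbound web traffic
        iptables -t mangle -A OUTPUT -p tcp --dport 80  -j MARK --set-mark 1
        iptables -t mangle -A OUTPUT -p tcp --dport 443 -j MARK --set-mark 1

        # send marked packets out via the VPN using a dedicated routing table
        ip rule add fwmark 1 table 100
        ip route add default dev tun0 table 100

        # rewrite the source address for packets leaving via the tunnel
        iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE

    SSH and MySQL traffic keeps using the main routing table, so the instance stays reachable.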

    Read the article

  • Optimal Instance Size for EC2 Sharepoint Server

    - by Rob Wilkerson
    I'm surprised that I can't find any info about this, but I'm not a Windows admin, just a novice EC2 user. I have a client who wants to stand up a SharePoint server on EC2 for internal use. The team is small (10-20 folks) and traffic will be light. Mostly, the client is looking for one place to store documents (and revisions of documents) while making access easy for authenticated users anywhere in the world. They've settled on SharePoint, and they have other EC2 instances, so that seems like the natural fit, but I'm trying to figure out what to recommend for them. I'm currently thinking about a Medium instance. I'm afraid to go smaller because I think Windows would need a fair amount of memory just to run, but I'm very open to suggestions. Any advice would be much appreciated. I expect that the storage itself would happen on an EBS mount, but again, suggestions welcome. Thanks for your input.

    Read the article

  • Has EC2 made self-hosting possible for 'amateur' sysadmins?

    - by Blankman
    I'm a developer, and it seems EC2 has made it possible for an amateur sysadmin like me to set up and maintain a fairly large set of servers. I don't mean to undermine real sysadmins, as I know their value, but what I'm getting at is that someone like me can set up and maintain a cluster of servers (front-end web servers, with some DB servers) using tools like EC2 and Capistrano, with the help of Google. This isn't something I would do as a long-term thing, but as a startup, one-man operation, I think I can pull this off until business takes off and I can hire this important role out. With EC2, I get my firewall, so I basically open up port 80 on my public-facing server, which will run HAProxy and route requests to my cluster of servers. Of course I am simplifying the setup, but I just want a feel for what you think of my perception. My application is a web application that will be running Ruby on Rails (Passenger) and talking to MySQL or PostgreSQL.

    Read the article

  • Windows 2003 server RRAS on VPC

    - by Saif
    I'm trying to set up an L2TP VPN server (to give users access to all my VPC instances) on a Windows 2003 instance running in my VPC. While trying to enable RRAS I'm getting the error, "less than two network interfaces were detected on this machine". Evidently that's because there's only one network interface available, the one with the private IP. I have an elastic IP assigned to this instance as well, but RRAS can't see it. What should I do so that RRAS can see the interface with the elastic IP?

    Read the article

  • I set up vsftpd on an Ubuntu server on my EC2 instance; how do I connect using SSH?

    - by Blankman
    I connect to my EC2 instance using SSH keys, so I don't have to log in each time. I just installed vsftpd on the Ubuntu server, but when I connect it obviously asks for my username and password. Since I connect using the ubuntu user that my AMI comes with, I don't even know the root password. Is there a way I can log in via FTP using SSH? Or do I just create a user on the system for FTP purposes? I've locked FTP down to my IP address, and I will shut down the FTP service once I'm done, as I don't need it running 99.99999% of the time.
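
    If the goal is just file transfer, SFTP may remove the need for vsftpd entirely, since it runs over the existing SSH daemon and accepts the same key pair; a sketch (key path and hostname are placeholders):

        # same credentials as your normal SSH login; no separate FTP service required
        sftp -i ~/.ssh/mykey.pem ubuntu@ec2-1-2-3-4.compute-1.amazonaws.com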

    Read the article

  • HAProxy and Intermediate SSL Certificate Issue

    - by Sam K
    We are currently experiencing an issue with verifying a Comodo SSL certificate on an Ubuntu AWS cluster. Browsers are displaying the site/content fine and showing all the relevant certificate information (at least, all the ones we've checked), but certain network proxies and the online SSL checkers are showing we have an incomplete chain. We have tried the following to resolve this:

    - Upgraded haproxy to the latest 1.5.3
    - Created a concatenated ".pem" file containing all the certificates (site, intermediate, w/ and w/out root)
    - Added an explicit "ca-file" attribute to the "bind" line in our haproxy.cfg file.

    The ".pem" file verifies OK using openssl. The various intermediate and root certificates are installed and showing in /etc/ssl/certs. But the checks still come back with an incomplete chain. Can anyone advise about anything else we can check or any other changes we can make to try to fix this? Many thanks in advance...

    UPDATE: The only relevant line from the haproxy.cfg (I believe) is this one:

        bind *:443 ssl crt /etc/ssl/domainaname.com.pem
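
    Two quick checks that often pin this down (file names here are placeholders): haproxy sends exactly what is in the crt file, so the intermediates must live inside that same .pem, in leaf-to-root order, alongside the private key; and openssl can show the chain the server actually presents:

        # order matters: site cert first, then intermediates, then the key
        cat site.crt intermediate1.crt intermediate2.crt site.key > /etc/ssl/site.pem

        # inspect the chain haproxy actually serves on port 443
        openssl s_client -connect example.com:443 -servername example.com -showcerts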

    Read the article

  • aws s3 works in a script but not from cron

    - by user3800017
    Guys, my first post! Hope not the last. I have a bunch of servers on the AWS EC2 platform. I made a simple script to back up my custom logs to their S3 storage bucket. The problem is the script works fine, but when I add it to the crontab, the script executes except for the s3 sync/mv part! Here is my code:

        NOW=$(date "+%b_%d_%Y")
        MY_HOSTNAME=`uname -n`
        mv /opt/req/req* /opt/req/bkup/
        mv /opt/response/res* /opt/req/bkup/
        cd /opt/req/bkup/
        tar -cvf ${MY_HOSTNAME}_req_bkup_${NOW}.tar re*
        rm *.txt
        aws s3 mv /opt/req/bkup/* s3://req
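
    A common culprit is cron's minimal environment: the aws binary may not be on cron's PATH, and the CLI can't find the credentials it normally reads from the invoking user's home directory. A crontab sketch that rules both out (paths and schedule are assumptions):

        # give cron an explicit PATH and HOME so `aws` and ~/.aws are found
        PATH=/usr/local/bin:/usr/bin:/bin
        HOME=/root
        0 2 * * * /opt/backup-logs.sh >> /var/log/backup-logs.log 2>&1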

    Read the article

  • "Cannot allocate memory" while no process seems to be using up memory

    - by omat
    I am not competent on server issues; any help is much appreciated. When I try to start a Python/Django shell on a Linux box, I am getting OSError: [Errno 12] Cannot allocate memory. free -m seems to confirm I am out of memory:

                     total       used       free     shared    buffers     cached
        Mem:           590        560         29          0          3         37
        -/+ buffers/cache:        518         71
        Swap:            0          0          0

    But I cannot see what is eating up the memory with top or ps aux:

          PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
            1 root      20   0 24336  908    0 S  0.0  0.2   0:00.68 init
            2 root      20   0     0    0    0 S  0.0  0.0   0:00.00 kthreadd
            3 root      20   0     0    0    0 S  0.0  0.0   0:04.85 ksoftirqd/0

    How can I identify the leak? Thanks. BTW, I am not sure if it is relevant, but the machine I am talking about is an AWS EC2 instance running Ubuntu 12.
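
    Two things worth checking (a sketch; the swap file size is an arbitrary assumption): sort processes by resident memory to find the consumer, and note that with no swap at all, a large process calling fork() can fail with ENOMEM even while some memory is free, so adding a small swap file is a common workaround on small EC2 instances:

        # largest resident-memory processes first
        ps aux --sort=-rss | head -n 15

        # add a 1 GB swap file
        sudo dd if=/dev/zero of=/swapfile bs=1M count=1024
        sudo chmod 600 /swapfile
        sudo mkswap /swapfile
        sudo swapon /swapfile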

    Read the article

  • s3cmd runs on the command line but not from cron

    - by Jonar
    Many have said that the problem is with the environment, but I still can't seem to solve this. BTW, I am using Ubuntu 9.10. Logged in as my user, then sudo -s, this command worked:

        s3cmd put file s3://bucket

    Now here is the simple script intended for testing:

        #!/bin/bash
        env > /tmp/cronjob.log
        s3cmd put file s3://bucket

    Installed with crontab -e:

        * * * * * /opt/script 2>&1 | logger

    Then tailing syslog shows:

        Dec 3 23:22:01 ubuntu CRON[10795]: (root) CMD (/opt/script 2&1 | logger)

    But verifying in S3Fox Organizer shows the file is not uploaded. (I tried changing to #!/bin/sh (no effect), putting the cron in /etc/crontab (no effect), and setting HOME=/home/user (no effect).) What other options can I try, or how else can I debug this problem? Thanks
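
    Since the cron entry runs as root, s3cmd looks for /root/.s3cfg rather than the config written when s3cmd --configure ran as your user; pointing at the config file explicitly is a cheap test (path is an assumption):

        * * * * * /usr/bin/s3cmd -c /home/user/.s3cfg put /opt/file s3://bucket 2>&1 | logger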

    Read the article

  • ls hangs after NFS server reboot

    - by Apikot
    I've got server A and server B. B acts as an NFS server, and A mounts from B. Both are running on EC2. Sometimes I have to shut down B and start a new instance (an identical one). After B is back up, trying to do anything inside the mounted directory on A (ls, for example) just hangs. I'm trying to set up a cron job that checks the status of the mount and remounts if anything is wrong. Is there any way to check the status of a mount?
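
    Any probe has to assume the mount may hang, so run it under a timeout and recover with a lazy unmount; a sketch for such a cron job (mount point and export are placeholders):

        #!/bin/bash
        MNT=/mnt/nfs
        if ! timeout 5 stat "$MNT" >/dev/null 2>&1; then
            # stale handle from the old server: force a lazy unmount, then remount
            umount -f -l "$MNT"
            mount -o soft,intr nfs-server:/export "$MNT"
        fi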

    Read the article

  • Intermittent apt-get 'no installation candidate' error on fabric deploy

    - by jberryman
    I'm experiencing a strange issue with a fabric script I'm using to bootstrap a server on EC2. I launch a stock Ubuntu 12.04 AMI, wait for it to start, then proceed with:

        with settings(host_string="ubuntu@%s" % i.dns_name, connection_attempts=30):
            sudo('apt-get -qy update')
            sudo('apt-get -qy install --no-install-recommends mdadm')  # don't install postfix
            # etc...

    The apt-get update appears to run fine and gives no errors; however (2/3 of the time or so), installing mdadm throws a "no installation candidate" error. When I SSH into the server and run apt-get install mdadm, I get the same error. After running apt-get update by hand, the package installs fine. Any ideas on what might be happening, or ideas for debugging?
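
    One plausible explanation (an assumption, not confirmed from any logs) is a race with first-boot provisioning: cloud-init may still be rewriting sources.list and running its own apt jobs when the script's update fires, so that update succeeds against an incomplete package list. A defensive sketch in shell, which could be wrapped in the same sudo() calls:

        # retry until apt actually knows about the package
        until apt-get -qy install --no-install-recommends mdadm; do
            sleep 5
            apt-get -qy update
        done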

    Read the article

  • Can I use a micro ec2 instance as a load balancer for my other large ec2 instances?

    - by Ryan Detzel
    The issue I'm having is that I want to upgrade that instance often (security patches, etc.), but I'm afraid something will fail and the site will be down. So I want to have another server set up and load-balance between the two; that way I can easily disable one, upgrade it, and once it's working add it back into the mix, and repeat. What kind of machine is needed for a load balancer? Would a micro instance work just fine? The site gets anywhere from 3-10k hits/day. I plan on using nginx as the load balancer.
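
    At that traffic level the balancing itself is cheap; a minimal nginx round-robin sketch (upstream IPs are placeholders) shows how little work the balancer does:

        # comment out one server line to take it out of rotation during upgrades
        upstream app {
            server 10.0.0.11:80 max_fails=3 fail_timeout=10s;
            server 10.0.0.12:80 max_fails=3 fail_timeout=10s;
        }
        server {
            listen 80;
            location / { proxy_pass http://app; }
        }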

    Read the article

  • MMS gets hostname from uname and can't connect to it

    - by Adam Monsen
    I'm trying to get 10gen's MongoDB Monitoring Service monitoring my 3-node replica set. The replica set is running in an AWS VPC. Each node runs on a different [virtual] machine. Assume their IPs are 192.168.1.1 (primary or secondary), 192.168.1.2 (primary or secondary), and 192.168.1.3 (arbiter). From a quick look at the source, MMS appears to get the hostname of the machine it is running on like so:

        platform.uname()[1]

    For my VPC EC2 instance, this returns something like ip-192-168-1-1. MMS then tries to connect to this hostname, which does not resolve. I'd rather just use IP addresses (since they're always static), but it seems like the hardcoded use of platform.uname()[1] in mmsAgent.py precludes that. So, what's an elegant way out of this? Hack /etc/hosts? I'm not setting up a DNS server just for this. Maybe I'm just misunderstanding how to configure MMS.
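
    The /etc/hosts route is only one line per node, which may be the least-bad option here; a sketch (addresses are the ones assumed above):

        # make each node's uname-derived name resolve to its static VPC address
        echo "192.168.1.1 ip-192-168-1-1" | sudo tee -a /etc/hosts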

    Read the article

  • Am I using too much memory? (Rails on EC2 with Resque)

    - by Stpn
    I am looking at the memory usage of a Rails application (it uses background processes via Resque), and since the common answer to the question "how many workers is too many?" was "test and see", I ran some memory commands and wonder if someone can help figure out whether memory usage is already high enough, or whether I can still add some extra workers. So (this is all under maximum load):

        $ free -t -m
                     total       used       free     shared    buffers     cached
        Mem:          1756       1532        223          0         12        229
        -/+ buffers/cache:       1291        464
        Swap:          895         10        885
        Total:        2652       1543       1108

        $ vmstat
        procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
         r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
         0  0  10588 156172  13400 326476    1    6     4     0    5    4  1  0 99  0

    If there is any extra info I can provide to help answer this, I would be happy to do so. If the question is strange in some way, please let me know and I'd be glad to fix it.
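
    To turn "test and see" into a number, measuring the per-worker resident set bounds how many more workers fit in the ~464 MB the cache-adjusted line shows free; a sketch (the process-name match is an assumption about how the workers appear in ps):

        # sum resident memory across resque workers, in MB
        ps aux | grep '[r]esque' | awk '{sum += $6} END {printf "%.0f MB total, %.0f MB avg\n", sum/1024, sum/NR/1024}'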

    Read the article

  • How to combine AWS and dedicated external servers?

    - by rfw21
    I have an extensive network of servers all currently hosted on AWS EC2. For reasons of cost I plan to gradually migrate to dedicated servers where possible. So: How can I best combine AWS and non-AWS servers in my network? Ideally, I should be able to assign internal IP addresses to the external servers, include them in AWS security groups and ensure that all private traffic between my AWS servers and external servers is secure.

    Read the article

  • route53 for multiple identical domains

    - by Yaniv Aknin
    My main domain is example.com, but I also bought example.org and example.net. I've configured my webservers at *.example.com to handle requests from the other domains and redirect them correctly to example.com, but I'd rather not re-configure all my DNS records at example.org and example.net to be the same as example.com. Other than writing some ugly synchronization script, what can I do to have Route 53 answer queries against my "other" domains with the same data as the "main" domain?

    Read the article

  • How frequent are network partitions on cloud services?

    - by roja
    Much is made of the CAP trade-off for data storage, where conflicts can be introduced if there is a network partition. My question: is there any evidence that this is a problem that arises with any significant frequency in modern cloud IaaS services, e.g. EC2, Azure, Rackspace? Is it a problem which, despite being a theoretical roadblock to constructing idealised distributed systems, is in fact a non-issue for all practical concerns? Has anyone experienced a network partition within one of these systems (within a single data-centre)? If so, would you be willing to share any details?

    Read the article

  • Separation of memory-oriented and CPU-oriented processes

    - by Jeevan Dongre
    I am a developer working for an e-commerce company, running an e-commerce application built with Ruby on Rails (Spree Commerce). I am presently running 2 medium instances in production. One is a high-memory instance which has 3.8 GB RAM and a single-core CPU, and the other is a high-CPU instance which has a dual-core CPU; AWS calls them m1.medium and c1.medium instances respectively. My question: is it possible to separate the processes according to whether they are CPU-intensive or memory-intensive, so that all the CPU-intensive processes run on the high-CPU instance and all the memory-intensive processes run on the high-memory instance? Is any tool available to identify those processes? Kindly give me some heads up. Thank you
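
    As a first pass at deciding which processes belong on which box, standard ps sorting shows the top consumers on each axis (a sketch; no Rails-specific tooling assumed):

        # heaviest CPU users
        ps aux --sort=-%cpu | head -n 10
        # heaviest resident-memory users
        ps aux --sort=-rss | head -n 10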

    Read the article

  • How to determine if my AWS/EC2 server has been compromised / resolution?

    - by ElHaix
    I have recently seen an increase in network in/out activity on my server and am trying to determine whether my AWS/EC2 instance has been compromised, and if so, how to resolve it. In my security group I have:

        Inbound:  80 (HTTP)   0.0.0.0/0
        Outbound: 80 (HTTP)   0.0.0.0/0
                  443 (HTTPS) 0.0.0.0/0

    Using TCP-UDP Endpoint Viewer, I see a lot of w3wp.exe TCP processes with varying local ports (http and numbered) as well as varying remote ports. Some processes go red/yellow/green on updates. The remote address for most w3wp processes is my EC2 instance; however, I am seeing several to *.deploy.akamaitechnologies.com and *.deploy.static.akamaitechnologies.com with received bytes varying between 4-11 megs. I also see:

    - Ec2Config.exe, remote address: 169.254.169.254
    - a System Process with remote address fetcher4-4.p.mail.ru (how can I get rid of this one?!), local port: http, remote port: 33432
    - some system processes from 114.216-244-93-rdns.wowrack.com, protocol TCP, local port http, varying remote ports
    - some baiduspider "System Process"es

    I'm afraid that my system may have been compromised, and I'm wondering if these results are any indication of that. If so, how can I eliminate these possible threats? I have MS Security Essentials installed.

    Read the article

  • Joomla performance problems on AWS

    - by Bobby Jack
    I'm running a site on AWS with the following setup:

    - Single m1.small instance (web server)
    - Single RDS m1.small DB
    - Joomla 1.5

    Generally the site is performant, but fairly low-traffic: say around 50-100 visits/hour. However, at peak time we see about double that traffic. During peak time, pretty much every day:

    - CPU usage on the web server slowly climbs to 100%
    - CPU usage on the RDS server climbs quite quickly to about 30%, from an average of about 15
    - Database connections shoot up to about 140, from a normal average of about 2 or 3

    The site is then occasionally unreachable, certainly according to Pingdom monitoring. Does anyone recognise this behaviour? Can you point me in the right direction to begin investigating? Of course, RDS makes it difficult to do things like slow query logging, so I've started by regularly dumping the MySQL process list into a file to see if there's anything I can spot there, but it would be good to have something more concrete to investigate.

    UPDATE: At least, can someone confirm that I'm definitely right in saying that this level of traffic implies the problem must be a specific type of query taking way longer than it should to execute? This would happen if a table gets locked and many queries need to write to it, right? For this very reason, I've already changed the __session table type to InnoDB.
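
    RDS can expose the slow query log through a DB parameter group rather than my.cnf; a sketch with the modern CLI (the group name is a placeholder, and the instance must be using a non-default parameter group):

        aws rds modify-db-parameter-group \
          --db-parameter-group-name my-joomla-params \
          --parameters "ParameterName=slow_query_log,ParameterValue=1,ApplyMethod=immediate" \
                       "ParameterName=long_query_time,ParameterValue=1,ApplyMethod=immediate"

        # the log then shows up as a table inside MySQL:
        # mysql> SELECT * FROM mysql.slow_log ORDER BY query_time DESC LIMIT 10;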

    Read the article

  • On-call EC2 System Administrator

    - by Ball
    Is there a company that can respond to critical alerts for our EC2-based web application? Does such a service exist? If not, is there a place where we can find individuals who are willing to do 5-10 hrs of work a month responding to issues? Thanks. I know this is not a technical question, but I know it's a useful question for many companies and I'm not sure where else to ask.

    Read the article

  • Rebooting an EC2 Instance

    - by ABrown
    I'm working on a project involving EC2 and I'm having a difficult time wrapping my head around this concept. With EC2 instances, will non-EBS backed volumes (standard EC2) survive a reboot of the OS? For example, I have an Ubuntu instance. If I type "/sbin/shutdown -r now", will I lose all data on the drive not in the AMI? I understand that if I terminate the instance via the tools or the control panel, I lose everything, but I can't find a concrete answer to the restart issue. An extra gold star goes to anyone who can link to documentation clearly explaining this. ;) Thanks for your time!

    Read the article

  • How to transfer 4.4 GB of user images to S3?

    - by April
    What would be the best way to transfer 4.4 GB of user images to S3? I would prefer to somehow transfer the images directly from my current production server to an S3 bucket, without having to download them to my home machine first and then upload them to the bucket. Thanks.
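
    Both s3cmd and the AWS CLI can push straight from the server; a sketch (source path and bucket name are placeholders):

        # run on the production server itself; sync skips files already present in the bucket
        aws s3 sync /var/www/uploads s3://my-image-bucket/uploads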

    Read the article

  • Unable to resize ec2 ebs root volume

    - by nathanjosiah
    I have followed many of the tutorials, which pretty much all say the same thing, basically:

    1. Stop the instance
    2. Detach the volume
    3. Create a snapshot of the volume
    4. Create a bigger volume from the snapshot
    5. Attach the new volume to the instance
    6. Start the instance back up
    7. Run resize2fs /dev/xxx

    However, step 7 is where the problems start. Running resize2fs always tells me that the filesystem is already xxxxx blocks big and does nothing, even with -f passed. So I continue with the tutorials, which again all basically say the same thing:

    1. Delete all partitions
    2. Recreate them back to what they were, except with bigger sizes
    3. Reboot the instance and run resize2fs

    (I have tried these steps both from the live instance and by attaching the volume to another instance and running the commands there.) The main problem is that the instance won't start back up again, and the system error log provided in the AWS console doesn't show any errors. (It does, however, stop at the GRUB bootloader, which to me indicates that it doesn't like the partitions; yes, the boot flag was toggled on the partition, with no effect.) The other thing that happens, regardless of what changes I make to the partitions, is that the instance the volume is attached to says the partition has an invalid magic number and the super-block is corrupt. However, if I make no changes and reattach the volume, the instance runs without a problem. Can anybody shed some light on what I could be doing wrong?

    Edit: On my new volume of 20GB with the 6GB image, df -h says:

        Filesystem     Size  Used  Avail Use% Mounted on
        /dev/xvde1     5.8G  877M  4.7G   16% /
        tmpfs          836M     0  836M    0% /dev/shm

    And fdisk -l /dev/xvde says:

        Disk /dev/xvde: 21.5 GB, 21474836480 bytes
        255 heads, 63 sectors/track, 2610 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x7d833f39

           Device Boot      Start         End      Blocks   Id  System
        /dev/xvde1               1         766     6144000   83  Linux
        Partition 1 does not end on cylinder boundary.
        /dev/xvde2             766         784      146432   82  Linux swap / Solaris
        Partition 2 does not end on cylinder boundary.

    Also, sudo resize2fs /dev/xvde1 says:

        resize2fs 1.41.12 (17-May-2010)
        The filesystem is already 1536000 blocks long.  Nothing to do!
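
    The fdisk output explains the resize2fs message: partition 1 still ends at cylinder 766 (~6 GB), so the filesystem genuinely has no room to grow even though the volume is 20 GB. A hedged outline of the usual fix (the exact start sector must be preserved, which is an assumption to verify against your own table; the swap partition also limits growth, since xvde1 can only extend to where xvde2 begins unless xvde2 is recreated at the end of the disk first):

        # from another instance, with the volume attached and NOT mounted:
        # in fdisk, delete partition 1 and recreate it with the SAME starting
        # sector, ending where partition 2 begins (or at end of disk if the
        # swap partition is moved), then write the table
        fdisk /dev/xvde

        e2fsck -f /dev/xvde1     # filesystem must be clean before resizing
        resize2fs /dev/xvde1     # now grows to fill the enlarged partition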

    Read the article
