Search Results

Search found 2978 results on 120 pages for 'amazon aws'.

Page 40/120 | < Previous Page | 36 37 38 39 40 41 42 43 44 45 46 47  | Next Page >

  • Automatically Applying Security Updates for AWS Elastic Beanstalk

    - by Eric Anderson
    I've been a fan of Heroku since its earliest days, but I like the fact that AWS Elastic Beanstalk gives you more control over the characteristics of the instances. One thing I love about Heroku is that I can deploy an app and not worry about managing it; I am assuming Heroku ensures all OS security updates are applied in a timely fashion, so I just need to make sure my app is secure. My initial research on Beanstalk shows that although it builds and configures the instances for you, after that it moves to a more manual management process, and security updates won't automatically be applied to the instances. There seem to be two areas of concern:

    1. New AMI releases - as new AMI releases come out, we would presumably want to run the latest (and presumably most secure) one. But my research seems to indicate you need to manually launch a new setup to see the latest AMI version and then create a new environment to use it. Is there a better, automated way of rotating your instances onto new AMI releases?

    2. In between releases there will be security updates released for individual packages, and it seems we want to apply those as well. My research suggests people install commands to occasionally run a yum update. But since new instances are created and destroyed based on usage, a new instance won't always have the updates (i.e. in the window between instance creation and its first yum update), so occasionally you will have instances that aren't patched. You will also have instances constantly patching themselves until the new AMI release is applied.

    My other concern is that these security updates may not have gone through Amazon's own review (as the AMI releases do), and applying them automatically might break my app. I know Dreamhost once had a 12-hour outage because they were applying Debian updates completely automatically without any review, and I want to make sure the same thing doesn't happen to me. So my question is: does Amazon provide a way to offer a fully managed PaaS like Heroku? Or is AWS Elastic Beanstalk really just an install script, after which you are on your own (other than the monitoring and deployment tools they provide)?
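
    For the AMI-rotation half of this, one semi-automated option (a sketch of my own, not an official managed-patching feature) is to script the Elastic Beanstalk API: list the available solution stacks and point an existing environment at the newest one, which rebuilds its instances from the newer AMI. A rough illustration in Python with boto3 - the environment name and the platform filter are hypothetical, and bigger platform jumps may still require a fresh environment, as suspected above:

        import boto3

        eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

        # Find an available solution stack for our platform; the filter string is
        # a placeholder - verify which entry is genuinely newest before relying on it.
        stacks = eb.list_available_solution_stacks()["SolutionStacks"]
        newest = [s for s in stacks if "running Python" in s][0]

        # Point the environment at that stack; Beanstalk replaces the instances
        # with ones built from the newer AMI.
        eb.update_environment(
            EnvironmentName="my-env",          # hypothetical environment name
            SolutionStackName=newest,
        )
        print("Requested platform update to:", newest)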

    Read the article

  • World Class Training For Them, an Amazon Gift Certificate For You

    - by Adam Machanic
    We have just two weeks to go before Paul Randal and Kimberly Tripp touch down in the Boston area to deliver their famous SQL Server Immersions course. This is going to be a truly fantastic SQL Server learning experience and we're hoping a few more people will join in the fun. This is where you come in: we have a few vacant seats remaining and we need your help spreading the word. Simply tell your friends and colleagues about the course and e-mail me (adam [at] bostonsqltraining [dot] com) the names...(read more)

    Read the article

  • With the attachment_fu rails plugin, is there any way to delete files uploaded to Amazon S3?

    - by Eric Nguyen
    Let's say I'm using attachment_fu to attach profile pics to user profiles in a system, with Amazon S3 used as the actual file storage. When users upload new profile pics, I'd like to replace the attached file with the new one. I can do this within my database (i.e. the file metadata) easily, but attachment_fu doesn't seem to provide methods for deleting the files from S3. Am I missing something, or am I approaching this the wrong way? Many thanks!
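
    I can't speak to whether attachment_fu exposes a delete hook for its S3 backend, but the operation underneath is a single delete-object call, so in the worst case you can fall back to the SDK directly when a new picture replaces an old one. A minimal sketch in Python with boto3 - the bucket and key are made up; use whatever key your attachment metadata says the old file was stored under:

        import boto3

        s3 = boto3.client("s3")

        # Remove the old profile picture once the replacement has been saved.
        s3.delete_object(
            Bucket="my-app-uploads",              # hypothetical bucket
            Key="user_profiles/42/old_avatar.jpg",  # hypothetical key
        )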

    Read the article

  • Which is better for saving a website's images: Amazon S3, Flickr or Picasa?

    - by Amr ElGarhy
    I have a website that I'm going to extend so that users can upload their images to it, and I want to save users' images in another storage service. Users will upload images and view them, and also share them with others. I know that I can do this using Amazon S3, Flickr or Picasa, but I want to know which is better than the others. Which one should I use, and why? Based on your experience, can you recommend one, or advise me if you know a better service than those three?
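
    If it ends up being S3, the upload-and-serve path is only a couple of calls. A sketch in Python with boto3, assuming a bucket that allows public-read ACLs; the names here are invented, not from the question:

        import boto3

        s3 = boto3.client("s3")
        bucket = "my-site-user-images"          # hypothetical bucket

        # Store the uploaded image and make it publicly readable so it can be
        # embedded or shared directly.
        s3.upload_file(
            "/tmp/upload.jpg", bucket, "users/42/photo.jpg",
            ExtraArgs={"ACL": "public-read", "ContentType": "image/jpeg"},
        )

        # Virtual-hosted-style URL the site (or other users) can link to.
        print(f"https://{bucket}.s3.amazonaws.com/users/42/photo.jpg")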

    Read the article

  • HTTP not working on EC2 instance with own domain name

    - by bogdanvursu
    I have this problem I've already posted on the Amazon AWS forum. Unfortunately I haven't got a clear answer, and I was hoping you guys could help. Here's the link: http://developer.amazonwebservices.com/connect/thread.jspa?messageID=198238#198207

    Basically, I don't know why, after associating an Elastic IP address and mapping it to one of my domains, FTP and ping work fine but HTTP does a 302 redirect to the Amazon AWS hostname I had before associating the Elastic IP address. Here's the question from the AWS forum:

    I have an EC2 instance with HTTP and FTP installed. They both worked. Then I associated an Elastic IP address with that instance, and mapped that IP address to a name which is a subdomain of a domain I own. I think it's an A record (I didn't do the mapping personally). Now FTP works and HTTP doesn't.

    The AWS host name before the Elastic IP association: ec2-184-73-27-8.compute-1.amazonaws.com
    The AWS IP address and host name after the association: 174.129.7.254 and ec2-174-129-7-254.compute-1.amazonaws.com
    The domain which is mapped to 174.129.7.254 using an A record is: demo.flashxml.net

    "FTP works" means that I can connect to 174.129.7.254, ec2-174-129-7-254.compute-1.amazonaws.com and demo.flashxml.net. "HTTP doesn't work" means that an HTTP request to 174.129.7.254, ec2-174-129-7-254.compute-1.amazonaws.com or demo.flashxml.net returns a 302 redirect to ec2-184-73-27-8.compute-1.amazonaws.com.

    Here is my VirtualHost file:

        <VirtualHost *:80>
            DocumentRoot /home/ec2-user/public_html/wordpress
            ServerName demo.flashxml.net
            ErrorLog logs/ec2-user-error_log
            <Directory /home/ec2-user/public_html/wordpress>
                AllowOverride FileInfo
                Order Deny,Allow
                Allow from All
            </Directory>
        </VirtualHost>

    I finally figured out what was wrong: I had installed WordPress on the server using the hostname provided by Amazon. After associating the Elastic IP and updating the DNS records, the server was still reachable - FTP working was the proof of that - and the 302 redirect when accessing via HTTP was caused by WordPress's hostname settings. So, what I've learned from all this is that I should set up my IP and DNS first, and only after that install WordPress or any other web app(s).

    Read the article

  • Installing Yaws server on Ubuntu 12.04 (Using a cloud service)

    - by Lee Torres
    I'm trying to get a Yaws web server working on a cloud service (Amazon AWS). I've compiled and installed a local copy on the server. My problem is that I can't reach Yaws on either port 8000 or port 80. I have the following configuration in yaws.conf:

        port = 8000
        listen = 0.0.0.0
        docroot = /home/ubuntu/yaws/www/test
        dir_listings = true

    This produces the following successful launch/result:

        Eshell V5.8.5 (abort with ^G)
        =INFO REPORT==== 16-Sep-2012::17:21:06 ===
        Yaws: Using config file /home/ubuntu/yaws.conf
        =INFO REPORT==== 16-Sep-2012::17:21:06 ===
        Ctlfile : /home/ubuntu/.yaws/yaws/default/CTL
        =INFO REPORT==== 16-Sep-2012::17:21:06 ===
        Yaws: Listening to 0.0.0.0:8000 for <3> virtual servers:
         - http://domU-12-31-39-0B-1A-F6:8000 under /home/ubuntu/yaws/www/trial
         -
        =INFO REPORT==== 16-Sep-2012::17:21:06 ===
        Yaws: Listening to 0.0.0.0:4443 for <1> virtual servers:
         -

    When I try to access the URL (http://ec2-72-44-47-235.compute-1.amazonaws.com), it never connects. I've tried using paping (http://code.google.com/p/paping/) to check if port 80 or 8000 is open, and I get a "Host can not be resolved" error, so obviously something isn't working. I've also tried setting yaws.conf so it listens on port 80, like this:

        port = 80
        listen = 0.0.0.0
        docroot = /home/ubuntu/yaws/www/test
        dir_listings = true

    and I get the following error:

        =ERROR REPORT==== 16-Sep-2012::17:24:47 ===
        Yaws: Failed to listen 0.0.0.0:80 : {error,eacces}
        =ERROR REPORT==== 16-Sep-2012::17:24:47 ===
        Can't listen to socket: {error,eacces}
        =ERROR REPORT==== 16-Sep-2012::17:24:47 ===
        Top proc died, terminate gserv
        =ERROR REPORT==== 16-Sep-2012::17:24:47 ===
        Top proc died, terminate gserv
        =INFO REPORT==== 16-Sep-2012::17:24:47 ===
            application: yaws
            exited: {shutdown,{yaws_app,start,[normal,[]]}}
            type: permanent
        {"Kernel pid terminated",application_controller,"{application_start_failure,yaws,{shutdown,{yaws_app,start,[normal,[]]}}}"}

    I've also opened up port 80 using iptables. Running sudo iptables -L gives this output:

        Chain INPUT (policy ACCEPT)
        target     prot opt source                       destination
        ACCEPT     tcp  --  ip-192-168-2-0.ec2.internal  ip-192-168-2-16.ec2.internal  tcp dpt:http
        ACCEPT     tcp  --  0.0.0.0                      anywhere                      tcp dpt:http
        ACCEPT     all  --  anywhere                     anywhere                      ctstate RELATED,ESTABLISHED
        ACCEPT     tcp  --  anywhere                     anywhere                      tcp dpt:http
        ACCEPT     tcp  --  anywhere                     anywhere                      tcp dpt:http

        Chain FORWARD (policy ACCEPT)
        target     prot opt source                       destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source                       destination

    In addition, I've gone to the security group panel in the Amazon AWS configuration area and added ports 80, 8000 and 8080 with IP source 0.0.0.0. Please note: if you try to access the URL of the virtual server now, it likely won't connect because I'm not currently running the yaws daemon; I've tested it while running yaws either via yaws or yaws -i. Thanks for your patience.

    Read the article

  • What are the options for hosting a small Plone site?

    - by Tina Russell
    I've developed a portfolio website for myself using Plone 4, and I'm looking for someplace to host it. Most Plone hosting services seem to focus on large, corporate deployments, but I need something that I can afford on a very limited budget and that fits a small, single-admin website. My understanding is that my basic options are these:

    1. I can go with a hosting service that specifically provides Plone. I know of WebFaction, but what others exist? Also, I'd have two stipulations for a Plone hosting service: (a) it needs to use Plone 4, for which I've developed my site, and (b) it needs to allow me SSH access to a home directory (including the Plone configuration), so that I may use my custom development eggs and such.

    2. I could use a VPS hosting service. What are my options here? Again, I need something cheap and scaled to my level.

    3. I could use Amazon EC2 or a similar service (please tell me of any) and pay by the tiniest unit of data. I'm a little scared of this because I have no idea how to do a cost-benefit analysis between this and a regular VPS host. The advantage of this approach would be that I only pay for what I use, making it very scalable, but I don't know how the overall cost would compare to any VPS host under similar circumstances. What factors enter into the cost of Amazon EC2? What can I expect to pay under either option for regular traffic for a new website? Which one is more desirable for when a rush of visitors drives up my bandwidth bill?

    One last note: I know Plone isn't common for websites for individuals, but please don't try to talk me out of it here; that's a completely different subject. For now, assume I'm sticking with Plone for good. Also, I have seen the Plone hosting services list on Plone.org - it's twenty pages long, and the first page was nothing but professional Plone consulting services that sometimes offer hosting for business clients. So, that wasn't much help. Thank you!

    Read the article

  • Permission denied on /dev/xvdf despite sudo?

    - by Sid
    I'm trying to get a stream off port 9999 and write to /dev/xvdf. I'm using Amazon EC2 with the Ubuntu 11.10 server image. By default I log in as 'ubuntu', which has sudo privileges. However, when I run the following netcat command, I get this error:

        ubuntu@ip-10-252-35-122:~$ ls -al /dev/xv*
        brw-rw---- 1 root disk 202,  1 2011-11-30 22:22 /dev/xvda1
        brw-rw---- 1 root disk 202, 80 2011-11-30 22:27 /dev/xvdf
        ubuntu@ip-10-252-35-122:~$ sudo netcat -p 9999 -l > /dev/xvdf
        bash: /dev/xvdf: Permission denied
        ubuntu@ip-10-252-35-122:~$

    Any idea why I get the permission denied error and how I can work around it?

    Update: Something mysterious is running in the background that resets the permissions? Check the snippet below - the permission flags seem to automatically reset when I try using /dev/xvdf!?!

        ubuntu@ip-10-252-35-122:~$ sudo chmod 777 /dev/xvdf
        ubuntu@ip-10-252-35-122:~$ ls -al /dev/xvdf
        brwxrwxrwx 1 root disk 202, 80 2011-11-30 22:43 /dev/xvdf
        ubuntu@ip-10-252-35-122:~$ sudo nc -p 9999 -l > /dev/xvdf
        This is nc from the netcat-openbsd package. An alternative nc is available in the netcat-traditional package.
        usage: nc [-46DdhklnrStUuvzC] [-i interval] [-P proxy_username] [-p source_port]
                  [-s source_ip_address] [-T ToS] [-w timeout] [-X proxy_protocol]
                  [-x proxy_address[:port]] [hostname] [port[s]]
        ubuntu@ip-10-252-35-122:~$ ls -al /dev/xvdf
        brw-rw---- 1 root disk 202, 80 2011-11-30 22:43 /dev/xvdf
        ubuntu@ip-10-252-35-122:~$

    I'm using stock Ubuntu Amazon EC2 images from http://alestic.com/ (links to images at the top).

    Read the article

  • How can I prevent double file uploading with Amazon S3?

    - by Tony
    I decided to use Amazon S3 for document storage for an app I am creating. One issue I've run into is that while I need to upload the files to S3, I also need to create a document object in my app so my users can perform CRUD actions.

    One solution is to allow a double upload: a user uploads a document to the server my Rails app lives on, I validate it and create the object, then pass it on to S3. One issue with this is that progress indicators become more complicated; most out-of-the-box plugins would show the client that the file has finished uploading because it is on my server, but then there would be a decent delay while the file goes from my server to S3. This also introduces unnecessary bandwidth (at least it does not seem necessary).

    The other solution I am thinking about is to upload the file directly to S3 with one AJAX request and, when that is successful, make a second AJAX request to store the object in my database. One issue here is that I would have to validate the file after it is uploaded, which means I have to run some clean-up code in S3 if the validation fails.

    Both seem equally messy. Does anyone have something more elegant working that they would not mind sharing? I would imagine this is a common situation with "cloud storage" being quite popular today. Maybe I am looking at this wrong.
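
    For the direct-to-S3 route, the browser can POST straight to the bucket using a signed policy that the app generates, so the only thing that ever touches your server is the small "create the document record" request afterwards. A sketch of the signing step in Python with boto3 - the bucket, key and size limit are assumptions, and the question's app is Rails, so treat this purely as an illustration of the mechanism rather than a drop-in:

        import boto3

        s3 = boto3.client("s3")

        # Generate a short-lived policy the browser can use to POST the file
        # directly to S3; the app never proxies the bytes.
        post = s3.generate_presigned_post(
            Bucket="my-app-documents",                       # hypothetical bucket
            Key="uploads/report-123.pdf",                    # hypothetical key
            Conditions=[
                ["content-length-range", 0, 10 * 1024 * 1024],  # cap at 10 MB
            ],
            ExpiresIn=600,                                   # valid for 10 minutes
        )

        # Hand post["url"] and post["fields"] to the client; it builds a
        # multipart/form-data POST from them and appends the file last.
        print(post["url"])
        print(post["fields"])

    The success response from S3 is then the natural trigger for the second, metadata-only request back to the app.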

    Read the article

  • s3fs Input/output error

    - by shadow_of__soul
    I'm trying to set up a backup system with s3fs and the Amazon S3 service. I followed these two guides:

    http://qugstart.com/blog/linux/how-to-mount-an-amazon-s3-bucket-as-virtual-drive-on-centos-5-2/
    http://blog.eberly.org/2008/10/27/how-i-automated-my-backups-to-amazon-s3-using-rsync/

    Anyway, tailing /var/log/messages I get:

        Aug 28 13:37:46 server s3fs:###response=403

    I have already tried creating the authentication file at /etc/passwd-s3fs and setting the access and secret keys there, and passing them through the command line; I checked the credentials several times and used them with S3Fox, where they work. I have also set the machine's time (with the date command) to be the same as the Amazon S3 servers' (I got the S3 server time by uploading a file with the file manager). It's not only rsync that doesn't work; commands like ls or cp in /mnt/s3 don't work either. Any help on how I can solve or debug this? Regards, Shadow.
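
    One thing that might help narrow it down (my suggestion, not something from the guides above): hit the bucket with an SDK from the same machine and look at the exact error code behind the 403 - S3 distinguishes SignatureDoesNotMatch, RequestTimeTooSkewed, AccessDenied and so on, which tells you whether it's the keys, the clock or the bucket policy. A quick sketch in Python with boto3, with a placeholder bucket name:

        import boto3
        from botocore.exceptions import ClientError

        s3 = boto3.client("s3")  # uses the same access/secret keys from env/config

        try:
            resp = s3.list_objects_v2(Bucket="my-backup-bucket", MaxKeys=5)
            print("Credentials OK, sample keys:",
                  [o["Key"] for o in resp.get("Contents", [])])
        except ClientError as err:
            # The error code here is far more specific than s3fs's bare 403.
            print("S3 error code:", err.response["Error"]["Code"])
            print("Message:", err.response["Error"]["Message"])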

    Read the article

  • How do I install and use the cli53 tools on Windows?

    - by pavlos
    I'm trying to find the simplest way to import a large number of BIND zone files into Route 53. I've had a quick look at the AWS CLI and the AWS Tools for Windows PowerShell, but they don't seem to include a zone-file import option the way the Route 53 GUI does. The cli53 utility, on the other hand, does, but it is written in Python and appears to have a series of prerequisites to get going, which I'm having trouble working out for Windows. I can find plenty of examples of setting it up under Linux, but only one reference to a PowerShell example here, and it doesn't explain how to install cli53 in the first place.

    The other option I'm exploring is to use the BIND to Amazon Route 53 Conversion Tool perl script to first convert the zone files to the Route 53 CreateHostedZoneRequest XML format, and then use the AWS New-R53HostedZone PowerShell cmdlet to import the zones. After the zones have been imported I'll be looking at running a script to validate that what has been created in Route 53 matches the existing nameserver, prior to updating each domain's nameserver records - I was planning on whipping something up using the new PS 4.0 Resolve-DnsName cmdlet, but let me know if you have any better suggestions.

    Any assistance would be greatly appreciated - thanks. (By the way, I had more reference links in my post but ServerFault won't allow me to post more than 2 links as a new member; for this same reason I also can't comment on Vasili's example in the other linked thread.)
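
    If cli53's Windows prerequisites stay painful, one further option (my own sketch, not something the tools above require) is to drive the Route 53 API directly from a short Python script, since the import is only two calls per zone: create the hosted zone, then push record batches. The BIND parsing is left out here and the records are hard-coded placeholders:

        import time
        import boto3

        r53 = boto3.client("route53")

        def import_zone(domain, records):
            # Hosted zone names end with a dot; CallerReference must be unique per call.
            zone = r53.create_hosted_zone(
                Name=domain + ".",
                CallerReference=f"import-{domain}-{time.time()}",
            )
            zone_id = zone["HostedZone"]["Id"]

            # Each record parsed from the BIND file becomes one UPSERT change.
            # (Skip the source zone's NS/SOA records; Route 53 supplies its own.)
            changes = [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": name,
                    "Type": rtype,
                    "TTL": ttl,
                    "ResourceRecords": [{"Value": value}],
                },
            } for (name, rtype, ttl, value) in records]

            r53.change_resource_record_sets(
                HostedZoneId=zone_id,
                ChangeBatch={"Changes": changes},
            )
            return zone["DelegationSet"]["NameServers"]

        # Placeholder data standing in for a parsed zone file.
        ns = import_zone("example.com", [
            ("www.example.com.", "A", 300, "192.0.2.10"),
        ])
        print("Point the registrar at:", ns)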

    Read the article

  • Load testing nginx inside AWS

    - by andy
    I'm trying to load test nginx running on AWS. I need to try to optimise it to handle 1 Gbps of inbound traffic. Currently I've got it to peak at 85 Mbit/s by running nginx on an m1.large with 4 other machines hitting it with ab, using -i (HEAD requests), -k (keepalives), -r (ignore failed requests), -n 500000 and -c 20000. I'm struggling to generate more than 85 Mbit/s of traffic from 4 machines, yet when I scp a large file I get nearly 0.25 Gbit/s of traffic going over the network. Are there any tools or approaches that I could use to load test nginx that might generate more load? I'm only interested in inbound traffic, so perhaps a DoS tool could help if it chucks away responses? I'm hitting a very small (40 byte) static asset, and have peaked at handling 50K concurrent connections and getting 25k reqs/s when just using a single load generator machine.

    Read the article

  • Route53 only for wildcard subdomain

    - by Philippe Gerber
    We recently moved our web application to AWS. One thing that is still managed by our old hoster is DNS.

    OLD HOSTER

        example.com.      NS     <Old hoster's name server>
        example.com.      A      <ElasticIP on EC2 instance>
        *.example.com.    CNAME  example.com.
        ...

    I'm now trying to set up and play around with Route53 and use it for name resolution of our EC2 instances.

    ROUTE53

        web-01.aws.example.com.  CNAME  ec2-xx-xx-xx-xx.eu-west-1.compute.amazonaws.com.
        web-02.aws.example.com.  CNAME  ec2-xx-xx-xx-xx.eu-west-1.compute.amazonaws.com.
        ...

    Now my question: is it possible to forward DNS queries for *.aws.example.com to Route53 (ns-xxxx.awsdns-59.co.uk.)? What kind of record would I have to add?
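
    For what it's worth, handing just a subtree to Route 53 is ordinary DNS delegation: create a hosted zone for aws.example.com and then add NS records for aws.example.com at the old hoster pointing at that zone's assigned Route 53 name servers. A small sketch with Python and boto3 that creates the zone and prints the name servers to give to the old hoster - the zone name is taken from the question, everything else is an assumption:

        import time
        import boto3

        r53 = boto3.client("route53")

        # Create the hosted zone that will hold everything under aws.example.com.
        zone = r53.create_hosted_zone(
            Name="aws.example.com.",
            CallerReference=f"aws-subzone-{time.time()}",  # must be unique per call
        )

        # These name servers are what the old hoster needs as NS records for
        # aws.example.com - that NS entry is the "kind of record" to add there.
        for ns in zone["DelegationSet"]["NameServers"]:
            print(f"aws.example.com.  NS  {ns}")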

    Read the article

  • mysql show databases not showing databases that are in /opt/bitnami/mysql/data directory

    - by hgolov
    Thank you for taking the time to look at my question. I have an EBS-backed EC2 Ubuntu server which is running but unreachable. ** There are, very stupidly, no recent backups. ** I made a snapshot of the block device, created a volume from it, spun up a new instance and attached the new volume. I can see all the data from my site in the /opt/bitnami/mysql/data directory, but when I go into the mysql console and type show databases; it shows only information_schema and test. How can I 'point' mysql at the correct folder? Thank you!

    Read the article

  • How to hide Amazon webstore default toolbar with Prototype?

    - by melaos
    Hi guys, after playing around this morning, I've found that there's this default chunk of HTML code in the Amazon webstore which adds a toolbar at the top of the page. The HTML looks like this:

        <td id="wba_logo_bg">
          <table class="logo" border="0" cellpadding="0" cellspacing="0" width="100%">
            <tbody><tr><td align="left"></td>
            <td class="wba_account" style="padding: 5px;" align="right" valign="top">
              <table border="0" cellpadding="0" cellspacing="0">
                <form action="#" id="searchForm" method="get" name="searchForm"></form>
                <tbody><tr><td class="wba_account_link">
                  <a xmlns:xhtml="http://www.w3.org/1999/xhtml" class="myAccountNav" href="#" onclick="return false;">home</a></td>
                  <td class="myAccountDots"></td>
                  <td class="wba_account_link"><a class="myAccountNav" href="#" onclick="return false;">view cart</a></td>
                  <td class="myAccountDots"></td><td class="wba_account_link"><a class="myAccountNav" href="#" onclick="return false;">my account</a></td>
                  <td class="myAccountDots"></td><td class="wba_account_link"><a class="myAccountNav" href="#" onclick="return false;">order status</a></td>
                  <td><img src="pageEditor_files/1_pixel.gif" hspace="7"></td>
                  <td><input name="keyword" tabindex="1" type="text"></td>
                  <td><img alt="Search" class="wba_search_btn" onclick="return false;" onkeyup="if (13==event.keyCode) searchForm.submit();" src="pageEditor_files/btn_search.gif" style="cursor: pointer;" tabindex="2" title="Search" hspace="3">
                </td></tr></tbody>
              </table>
            </td></tr></tbody>
          </table>
        </td>

    Thus far I was able to use Prototype to find those with the class name of wba_account_link and hide them via the code below:

        function hideAmazonToolbar() {
            $("wba_logo_bg").hide();
        } //end function

    But what I really want to do, preferably, is to hide the whole tbody instead; with my limited Prototype skills, I don't really know how to do this. Can anybody point me to the right resources on how to get this done? Thanks!

    EDIT: I went higher up and apparently there's a td with an id, and solved it using Prototype's hide function! Man, I love JavaScript frameworks :) Thanks :)

    Read the article

  • Is it possible to gzip and upload this string to Amazon S3 without ever being written to disk?

    - by BigJoe714
    I know this is probably possible using streams, but I'm not sure of the correct syntax. I would like to pass a string to the Save method and have it gzip the string and upload it to Amazon S3 without it ever being written to disk. The current method inefficiently reads/writes to disk in between. The S3 PutObjectRequest has a constructor with InputStream input as an option.

        import java.io.*;
        import java.util.zip.GZIPOutputStream;

        import com.amazonaws.auth.PropertiesCredentials;
        import com.amazonaws.services.s3.AmazonS3;
        import com.amazonaws.services.s3.AmazonS3Client;
        import com.amazonaws.services.s3.model.PutObjectRequest;

        public class FileStore {

            public static void Save(String data) throws IOException {
                File file = File.createTempFile("filemaster-", ".htm");
                file.deleteOnExit();
                Writer writer = new OutputStreamWriter(new FileOutputStream(file));
                writer.write(data);
                writer.flush();
                writer.close();

                String zippedFilename = gzipFile(file.getAbsolutePath());
                File zippedFile = new File(zippedFilename);
                zippedFile.deleteOnExit();

                AmazonS3 s3 = new AmazonS3Client(new PropertiesCredentials(
                        new FileInputStream("AwsCredentials.properties")));
                String bucketName = "mybucket";
                String key = "test/" + zippedFile.getName();
                s3.putObject(new PutObjectRequest(bucketName, key, zippedFile));
            }

            public static String gzipFile(String filename) throws IOException {
                try {
                    // Create the GZIP output stream
                    String outFilename = filename + ".gz";
                    GZIPOutputStream out = new GZIPOutputStream(new FileOutputStream(outFilename));

                    // Open the input file
                    FileInputStream in = new FileInputStream(filename);

                    // Transfer bytes from the input file to the GZIP output stream
                    byte[] buf = new byte[1024];
                    int len;
                    while ((len = in.read(buf)) > 0) {
                        out.write(buf, 0, len);
                    }
                    in.close();

                    // Complete the GZIP file
                    out.finish();
                    out.close();
                    return outFilename;
                } catch (IOException e) {
                    throw e;
                }
            }
        }
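
    The in-memory version of this has the same shape regardless of language: compress into a byte buffer instead of a temp file, then hand the SDK that buffer as the object body. The poster's stack is Java, but purely to illustrate the idea compactly, here is the flow in Python with boto3 (bucket and key are placeholders taken from the question):

        import gzip
        import io
        import boto3

        def save(data: str) -> None:
            # Gzip the string into an in-memory buffer; nothing touches disk.
            buf = io.BytesIO()
            with gzip.GzipFile(fileobj=buf, mode="wb") as gz:
                gz.write(data.encode("utf-8"))

            # Hand the compressed bytes straight to S3.
            boto3.client("s3").put_object(
                Bucket="mybucket",
                Key="test/filemaster.htm.gz",
                Body=buf.getvalue(),
                ContentType="text/html",
                ContentEncoding="gzip",
            )

        save("<html><body>hello</body></html>")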

    Read the article

  • What Amazon S3 .NET Library is most useful and efficient?

    - by Geo
    There are two main open-source .NET Amazon S3 libraries:

    1. Three Sharp
    2. LitS3

    I am currently using LitS3 in our MVC demo project, but there is some criticism about it. Has anyone here used both libraries, so that they can give an objective point of view? Below are some sample calls using LitS3.

    On the demo controller:

        private S3Service s3 = new S3Service()
        {
            AccessKeyID = "Thekey",
            SecretAccessKey = "testing"
        };

        public ActionResult Index()
        {
            ViewData["Message"] = "Welcome to ASP.NET MVC!";
            return View("Index", s3.GetAllBuckets());
        }

    On the demo view:

        <% foreach (var item in Model) { %>
            <p>
                <%= Html.Encode(item.Name) %>
            </p>
        <% } %>

    EDIT 1: Since I have to keep moving and there is no clear indication of which library is more effective and kept more up to date, I have implemented a repository pattern with an interface that will allow me to change libraries if I need to in the future. Below is a section of the S3Repository that I have created and that will let me change libraries in case I need to:

        using LitS3;

        namespace S3Helper.Models
        {
            public class S3Repository : IS3Repository
            {
                private S3Service _repository;

                #region IS3Repository Members

                public IQueryable<Bucket> FindAllBuckets()
                {
                    return _repository.GetAllBuckets().AsQueryable();
                }

                public IQueryable<ListEntry> FindAllObjects(string BucketName)
                {
                    return _repository.ListAllObjects(BucketName).AsQueryable();
                }

                #endregion
            }
        }

    If you have any information about this question please let me know in a comment, and I will get back and edit the question.

    EDIT 2: Since this question is not getting attention, I integrated both libraries in my web app to see the differences in design. I know this is probably a waste of time, but I really want a good long-run solution. Below you will see two samples of the same action with the two libraries; maybe this will motivate some of you to let me know your thoughts.

    WITH THREE SHARP LIBRARY:

        public IQueryable<T> FindAllBuckets<T>()
        {
            List<string> list = new List<string>();
            using (BucketListRequest request = new BucketListRequest(null))
            using (BucketListResponse response = service.BucketList(request))
            {
                XmlDocument bucketXml = response.StreamResponseToXmlDocument();
                XmlNodeList buckets = bucketXml.SelectNodes("//*[local-name()='Name']");
                foreach (XmlNode bucket in buckets)
                {
                    list.Add(bucket.InnerXml);
                }
            }
            return list.Cast<T>().AsQueryable();
        }

    WITH LITS3 LIBRARY:

        public IQueryable<T> FindAllBuckets<T>()
        {
            return _repository.GetAllBuckets()
                              .Cast<T>()
                              .AsQueryable();
        }

    Read the article

  • IPtables AWS EC2 NAT/Reverse NAT - For Reverse Proxy style setup but with IPtables

    - by Mark
    Initially I was thinking I'd need a reverse proxy or something so that SSL/TLS traffic looks like it is being terminated at a server and IP address in the AWS cloud, and is then forwarded on to our actual web servers, which aren't in the cloud. I've not done much with iptables PREROUTING/POSTROUTING, DNAT or SNAT, which I know are the things (or the combination of things) I need in order to achieve what I'm trying to do.

    Things to note:

    - Client/User - must not be able to see the backend IP address; they should only see the IP address of the cloud box.
    - HTTPS (TLS/SSL) - the connection shouldn't be terminated at the cloud box; it should act almost like a router.
    - EC2 instance - has only one network interface available to play with, so this is an (internet <- internet) type of routing.
    - The EC2 instance's IP address is already more or less behind a NAT that I have no control over. For example, the public IP address could be 46.1.1.1 but the instance IP will be 10.1.1.1. Connections from the client will go to 46.1.1.1, which will end up at the instance on interface 10.1.1.1.

    The connection from the client then needs to be forwarded (DNAT) on to the backend web servers, which are back out on the internet (SNAT). Possibly part of the problem is that the SNAT will need to be set to the external interface of the instance, and I wonder if this makes it harder for iptables to track the connection.

    So basically I'm looking to have it look as though connections are terminating at this server and its IP address, whereas all that's really happening is that the HTTPS request and connection are being forwarded straight on to another internet-facing web server. How possible does that sound?

    Read the article

  • EC2 instance store cloning or moving to EBS via the GUI management console

    - by devnull
    I have found similar questions here, but the answers are either outdated or are from the command line. The case is this: I have an EC2 instance using instance store (this was the only AMI available for Debian 6 in Ireland). Now, through the AWS GUI, I can take a snapshot of the instance volume and/or even create a volume, but an image made from the snapshot doesn't boot. What is the best way to either clone an EC2 instance that uses instance store, OR launch a new EBS-backed instance (an identical clone) from the snapshot of the instance store - from the GUI AWS Management Console and not the command line? Before turning this down, consider that there is no similar question on how to do it via the AWS Management Console. Hint: "can't be done" is not an appropriate answer, as you can create a snapshot of the instance-store-backed instance and/or a volume and create an AMI from that snapshot.

    Read the article

  • Incremental backups from Rackspace Cloud Files to Amazon Glacier

    - by Martin Wilson
    Is there a software product/module (open-source or commercial) that can provide incremental backups from Rackspace Cloud Files to Amazon Glacier? We are looking for something that will provide the following functionality (or achieve the same result, i.e. a cost-effective backup strategy for files stored in Rackspace Cloud Files):

    - Work out which files have been added to or modified in a Rackspace Cloud account (since the last backup).
    - Create a ZIP (or similar) of these files and store them in Amazon Glacier.
    - Keep a record of which files are in which ZIPs.
    - Ideally, restore either a single file or all files from Glacier back into Rackspace.
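
    I don't know of an off-the-shelf product that does exactly this, but the Glacier half is small enough to script. The sketch below (Python with boto3; the vault name and file contents are invented, and the "list changed files in Cloud Files" step is stubbed out) shows the zip-upload-and-manifest part of the workflow:

        import io
        import json
        import zipfile
        from datetime import datetime, timezone

        import boto3

        glacier = boto3.client("glacier")
        VAULT = "cloudfiles-backup"          # hypothetical vault name

        def backup_changed_files(changed):
            """changed: list of (object_name, file_bytes) gathered from Cloud Files
            by some other means (that part is stubbed out here)."""
            # Bundle the changed objects into one in-memory ZIP.
            buf = io.BytesIO()
            with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
                for name, data in changed:
                    zf.writestr(name, data)

            stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
            resp = glacier.upload_archive(
                accountId="-",               # "-" means the account making the request
                vaultName=VAULT,
                archiveDescription=f"cloudfiles-incremental-{stamp}",
                body=buf.getvalue(),
            )

            # Record which files live in which archive, so single-file restores
            # know which archive to retrieve.
            entry = {
                "archiveId": resp["archiveId"],
                "uploaded": stamp,
                "files": [name for name, _ in changed],
            }
            with open("glacier-manifest.jsonl", "a") as fh:
                fh.write(json.dumps(entry) + "\n")

        backup_changed_files([("images/logo.png", b"...bytes from Cloud Files...")])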

    Read the article

  • CloudFormation - How to start a Windows Service with cfn-init

    - by Edwin
    I'm creating a CloudFormation stack that will install and start a service on a Windows instance. I've figured out how to install the service, but how do I start the service using cfn-init? The examples all seem to use Linux, as there is a reference to "sysvinit". How do I structure AWS::CloudFormation::Init so that cfn-init will start Windows services after installing them? Do I leave in the sysvinit key, replace it with something else, or take it out?

    PS: I'm referring to starting services by providing information to AWS::CloudFormation::Init.services. Also, it would be nice to know how "packages" work for Windows; AWS's announcement says that packages are supported on Windows, but there is no Windows-specific documentation.

    Read the article
