Search Results

Search found 2841 results on 114 pages for 'amazon cloudformation'.

Page 16/114 | < Previous Page | 12 13 14 15 16 17 18 19 20 21 22 23  | Next Page >

  • High CPU Steal percentage on Amazon EC2 Instance

    - by Aditya Patawari
    I am experiencing a high CPU steal percentage on an Amazon EC2 large instance. I know this means my virtual CPU is waiting on the physical CPU of the host for time. What can I do to reduce this percentage and get the maximum out of the CPU? Steal is consistently at 20%, and system load crosses 10 when this happens. I have checked memory and network and I am sure they are not the bottleneck. Is this normal for such an environment? Also, are there any system-level optimization techniques for reducing the steal percentage from within the virtual instance?

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                  52.38    0.00    8.23    0.00   21.21   18.18
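    As a hedged aside, output like the above can be watched live with iostat from the sysstat package, which is presumably how these numbers were gathered:

        # print CPU utilization, including %steal, every 5 seconds
        iostat -c 5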

    Read the article

  • Upgrading Fedora on Amazon to 12 but getting libssl.so.* & libcrypto.so.* are missing

    - by bateman_ap
    I am upgrading to Fedora 12 on an Amazon EC2 using the help here: http://www.ioncannon.net/system-administration/894/fedora-12-bootable-root-ebs-on-ec2/ I managed to do a 64-bit instance OK, however I am facing some problems with a standard one. On the final bit of the install from 11 to 12 I am getting an error: Error: Missing Dependency: libcrypto.so.8 is needed by package httpd-tools-2.2.1.5-1.fc11.1.i586 (installed) Error: Missing Dependency: libssl.so.8 is needed by package httpd-tools-2.2.1.5-1.fc11.1.i586 (installed) This is referenced in the comments from the link above, but all it says is: Q: Apache failed, or libssl.so.* & libcrypto.so.* are missing A: These versions are missing the symlinks they require. Easy fix, go symlink them to the newest versions in /lib However, I am afraid I don't know how to do this. If it is any help, I tried running the command locate libssl.so and got:

        /lib/libssl.so.0.9.8b
        /lib/libssl.so.6
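    The fix that comment describes would look something like the following (a sketch, not a verified fix: the source filenames depend on which openssl build is actually installed, so check ls /lib/libssl.so* /lib/libcrypto.so* first):

        # point the soname the fc11 package expects at the newest library actually present
        sudo ln -s /lib/libssl.so.0.9.8b /lib/libssl.so.8
        sudo ln -s /lib/libcrypto.so.0.9.8b /lib/libcrypto.so.8
        sudo ldconfig   # refresh the shared-library cache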

    Read the article

  • How does Amazon ec2-user get its sudo rights

    - by Johan
    I am looking for where the default Amazon AMI Linux image sets up the privileges for the default ec2-user account. After logging in with this account I can use sudo successfully. Checking via the sudoers file, which I open by running visudo (with no other options), I see a few default settings and permissions for root (root ALL ALL). So... where are the permissions for ec2-user assigned? I have not yet tried to add a new permission, but ultimately I want to reserve ec2-user for systems-management tasks and use a non-full-root user for administering the applications (stop and start mysql, httpd, edit Apache's vhost files, and upload / edit web content under the web root).
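    On Amazon Linux the grant typically lives outside the main sudoers file, in a drop-in directory pulled in by the #includedir /etc/sudoers.d directive, so a quick way to locate it (a sketch; paths assume a stock Amazon Linux AMI):

        # find which sudoers fragment mentions ec2-user
        sudo grep -r ec2-user /etc/sudoers /etc/sudoers.d/
        # on stock AMIs this usually surfaces a line like:
        #   ec2-user ALL = NOPASSWD: ALL    (in /etc/sudoers.d/cloud-init)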

    Read the article

  • "Permission denied (publickey)" error when ssh'ing to Amazon EC2 Debian AMI 05f3ef71

    - by user193537
    I have launched a Debian system using AMI 05f3ef71 in Amazon EC2, but I have no luck connecting to it using SSH as suggested in "Connecting to Your Linux/UNIX Instances Using SSH". I tried several user names: ec2-user, root, debian... None of them worked. I always get a Permission denied (publickey) error message. Using ec2-get-console-output instance_id as suggested doesn't work either: it requires option "-K", and if I supply it, I get the error message Required option '-C, --cert CERT' missing, but I have no idea what to supply there. Port 22 is open on the affected instance. Does anyone have an idea what I could try to log in to my instance?
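    One hedged thing to try before digging deeper: Debian community AMIs generally use admin (not root or ec2-user) as the login account, and a verbose ssh run shows which keys are actually being offered:

        # -i selects the keypair downloaded at launch; 'admin' is the usual Debian AMI user
        # (an assumption worth checking against the AMI's description)
        ssh -v -i /path/to/keypair.pem admin@ec2-xx-xx-xx-xx.compute-1.amazonaws.com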

    Read the article

  • Amazon EC2 as load balanced/failover solution

    - by sugiggs
    Hi all, I'm considering an idea but am not sure of its pros and cons. At the moment we are hosting our website on a dedicated server. As a failover / load-balanced solution, I'm thinking of using Amazon EC2 + EBS. The files can be rsync'd, and MySQL can be set up as master-master replication. When the load is high, I can bring up the machine, give it some time to sync, and load-balance traffic there. Is this doable? Any links where I can read more on this?
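    The file-sync half of that idea is a one-liner per run (a sketch; the paths, user and key file are placeholders):

        # mirror web content to the EC2 standby, deleting files removed at the source
        rsync -az --delete -e "ssh -i /path/to/key.pem" /var/www/ user@ec2-host:/var/www/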

    Read the article

  • Amazon EC2 IPSEC

    - by John Qualis
    I have configured an Ubuntu 12.04 64-bit server on Amazon AWS to act as a strongSwan IPSEC server. I want to connect to it from the built-in IPSEC client of my Mac OS X Lion machine, which is on my home network. I log into the AWS machine using ssh to ubuntu@public-ip, providing the private RSA key as the .pem file I downloaded when the instance was created. The ssh connection works fine, but the IPSEC connection fails. What credentials/configurations should I provide for an IPSEC connection from my OS X client to the AWS Ubuntu server? My OS X machine is behind an ISP-provided modem/router. Appreciate any help, and thanks in advance.
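    A minimal sketch of the server side, assuming the OS X built-in client is used in its IKEv1 PSK + XAuth ("Cisco IPSec") mode; the subnet, address pool and secrets are hypothetical, and UDP 500/4500 must be open in the instance's security group since the client sits behind NAT:

        # /etc/ipsec.conf
        conn osx
            keyexchange=ikev1
            authby=xauthpsk
            left=%defaultroute
            leftid=<elastic IP>    # EC2 NATs the public IP, so the identity must match what the client dialed
            leftsubnet=0.0.0.0/0
            right=%any
            rightsourceip=10.99.0.0/24
            auto=add

        # /etc/ipsec.secrets
        : PSK "your-shared-secret"
        youruser : XAUTH "yourpassword"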

    Read the article

  • Amazon EC2 - Reserved Private Addresses

    - by reach4thelasers
    I've got a private DNS server running on Amazon EC2. I don't need a public IP address because it is only used for private addressing (web1.xxx.internal, database1.xxx.internal). The problem is that I had to terminate the instance recently and start a new one. This meant the private IP address of the DNS server changed, and I had to log in to each of my other 15 servers one by one and change the DNS address to point to the new DNS server. There must be a better way to do this; if so, what is it?
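    One hedged fix: in a VPC (unlike EC2-Classic) a specific private IP can be requested at launch, so a replacement DNS server can come back on the same address the other machines already point at (IDs below are placeholders):

        aws ec2 run-instances --image-id ami-xxxxxxxx --subnet-id subnet-xxxxxxxx \
            --instance-type t1.micro --private-ip-address 10.0.0.10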

    Read the article

  • Mounting an Amazon EC2 instance on Mac OS X

    - by hinghoo
    I've got public-key authentication working between my Mac OS X machine and an Amazon EC2 instance, so that from the command line I can just type the following and it works: ssh root@[IPAddressOfEC2Instance] The strange thing is that I can't seem to mount the instance using "Connect to Server" in the Finder. I've tried typing the following server addresses into the "Connect to Server" dialog: ftps://[IPAddressOfEC2Instance] and ftps://root@[IPAddressOfEC2Instance]. But all I get is "You entered an invalid username or password. Please try again." The root user on the EC2 instance has a blank password, and I'm wondering if it has to do with that; however, I can't change the password for the root user. I can use an SFTP client to connect to the machine, I just can't mount it with "Connect to Server". It asks for a username and password (for a registered user), and root/[blank] is not accepted. The other option is "Guest", which brings up an empty folder in the Finder.
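    Worth noting as a hedged aside: Finder's "Connect to Server" speaks AFP/SMB/FTP(S) but not SFTP, which is why the working key-based ssh login doesn't help it. One common workaround is sshfs on top of FUSE for OS X (the mount point and key path are assumptions):

        mkdir -p ~/ec2
        # mount the instance's filesystem over ssh using the same key that works on the command line
        sshfs root@[IPAddressOfEC2Instance]:/ ~/ec2 -o IdentityFile=~/.ssh/mykey.pem,volname=EC2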

    Read the article

  • Use Amazon EC2 as a backup server

    - by MikeMurko
    I would like to use Amazon EC2 as an emergency backup database + web server in the event our primary host becomes unavailable. I feel like I wouldn't have trouble setting up a Windows instance, installing SQL Server, and getting the web server up and running (it would take a few hours, plus installing various libraries, our source code, etc.). My question relates to pricing. If I simply "stop" the instance rather than "terminate" it, does that stop counting "instance-hours"? I would prefer not to terminate the instance and lose all the work I spent setting it up. If I must "terminate" in order to stop the billing, is it possible to make an image of the server after I have set it all up, and then save that image somewhere (S3?)? Is this something people do regularly? Ideally this instance would just be waiting in the wings for an issue with our host, costing us nothing except perhaps data storage.
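    For what it's worth (hedged, since pricing rules change): a stopped EBS-backed instance accrues no instance-hours, only EBS storage charges, so both routes avoid compute billing (the instance ID below is a placeholder):

        # stop (not terminate): instance-hour billing halts, the EBS root volume persists
        aws ec2 stop-instances --instance-ids i-xxxxxxxx
        # or capture the configured server as a reusable AMI, backed by EBS snapshots
        aws ec2 create-image --instance-id i-xxxxxxxx --name standby-web-db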

    Read the article

  • Amazon S3 - Storage Class and Server Side Encryption

    - by Steven
    Ahhh! I am using Amazon S3 for some low-price storage to clear down our SAN. I created the bucket and created a root folder. I set the storage class to Standard and server-side encryption to AES. I started a copy job to move the files; some files copied over and I checked them: Reduced Redundancy, encryption set to none. WTF? So I deleted all files and folders, manually recreated the folder structure, and again set the storage class and encryption level. I copied some files and bam, still showing (at the file level) Reduced Redundancy and no encryption. So my question is this: (a) is it really redundant and encrypted, just not shown properly (the root folder shows it, so how can the files not?), or (b) am I being a huge tool and missing something?
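    A hedged explanation: S3 "folders" are just key prefixes, so storage class and server-side encryption are not inherited from them; they are per-object attributes that the copy tool must request on every upload. With the AWS CLI that looks like:

        # set class and SSE explicitly per upload; objects copied without these flags
        # get whatever defaults the copying tool uses, not the folder's settings
        aws s3 cp ./file.bin s3://my-bucket/folder/file.bin --storage-class STANDARD --sse AES256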

    Read the article

  • Amazon Web Services (AWS) Plug-in for Oracle Enterprise Manager

    - by Anand Akela
    Contributed by Sunil Kunisetty and Daniel Chan

    Introduction and Architecture

    As more and more enterprises deploy some of their non-critical workload on Amazon Web Services (AWS), it's becoming critical to monitor those public AWS resources alongside their on-premise resources. Oracle recently announced the Oracle Enterprise Manager Plug-in for Amazon Web Services (AWS), which allows you to achieve that goal. The on-premise Oracle Enterprise Manager (EM12c) acts as a single tool to get a comprehensive view of your public AWS resources as well as your private cloud resources. By deploying the plug-in within your Cloud Control environment, you gain the following management features:

    - Monitor EBS, EC2 and RDS instances on Amazon Web Services
    - Gather performance metrics and configuration details for AWS instances
    - Raise alerts and violations based on thresholds set on monitoring
    - Generate reports based on the gathered data

    Users of this plug-in can leverage rich Enterprise Manager features such as system promotion, incident generation based on thresholds, integration with 3rd-party ticketing applications, etc. AWS monitoring via this plug-in is enabled via the Amazon CloudWatch API, and users of the plug-in are responsible for supplying credentials for accessing AWS and the CloudWatch API. This plug-in can only be deployed on an EM12c R2 platform, and the agent version should be at minimum 12c R2.

    [Architecture diagram: EM12c with the AWS plug-in monitoring Amazon Elastic Block Store (EBS), Amazon Elastic Compute Cloud (EC2) and Amazon Relational Database Service (RDS) via the CloudWatch API]

    Here are a few key features:

    - Rich and exhaustive list of metrics; metrics can be gathered from an Agent running outside AWS
    - Critical configuration information
    - Custom home pages with charts and AWS configuration information
    - Incident generation based on thresholds set on monitoring data

    Discovery and Monitoring

    AWS instances can be added to EM12c either via the EM12c User Interface (UI) or the EM12c Command Line Interface (EMCLI) by providing the AWS credentials (Secret Key and Access Key Id) as well as resource-specific properties as target properties. Here is a quick mapping of target types and properties for each AWS resource:

        AWS Resource Type | Target Type        | Resource-specific properties
        ------------------+--------------------+-------------------------------------------------------------
        EBS Resource      | Amazon EBS Service | CloudWatch base URI, EC2 base URI, Period, Volume Id, Proxy Server and Port
        EC2 Resource      | Amazon EC2 Service | CloudWatch base URI, EC2 base URI, Period, Instance Id, Proxy Server and Port
        RDS Resource      | Amazon RDS Service | CloudWatch base URI, RDS base URI, Period, Instance Id, Proxy Server and Port

    Proxy server and port are optional and are only needed if the agent is within the firewall. Here is an emcli example to add an EC2 target. Please read the Installation and Readme guide for more details and step-by-step instructions to deploy the plug-in and add the AWS instances.
        ./emcli add_target \
            -name="<target name>" \
            -type="AmazonEC2Service" \
            -host="<host>" \
            -properties="ProxyHost=<proxy server>;ProxyPort=<proxy port>;EC2_BaseURI=http://ec2.<region>.amazonaws.com;BaseURI=http://monitoring.<region>.amazonaws.com;InstanceId=<EC2 instance Id>;Period=<data point period>" \
            -subseparator=properties="="

        ./emcli set_monitoring_credential \
            -set_name="AWSKeyCredentialSet" \
            -target_name="<target name>" \
            -target_type="AmazonEC2Service" \
            -cred_type="AWSKeyCredential" \
            -attributes="AccessKeyId:<access key id>;SecretKey:<secret key>"

    The emcli utility is found under the ORACLE_HOME of the EM12c install. Once the instance is discovered, the target will show up in the 'All Targets' list under 'Amazon EC2 Service'. Once the instances are added, one can navigate to the custom home pages for these resource types. The custom home pages not only include critical metrics, but also vital configuration parameters and the incidents raised for these instances. By mapping the configuration parameters as instance properties, we can slice-and-dice and group various AWS instances by leveraging the EM12c Config Search feature. The following configuration properties and metrics are collected for these resource types:

        EBS Resource
            Configuration properties: Volume Id, Volume Type, Device Name, Size, Availability Zone
            Metrics:
                Response: Status
                Utilization: QueueLength, IdleTime
                Volume Statistics: ReadBandwidth, WriteBandwidth, ReadThroughput, WriteThroughput
                Operation Statistics: ReadSize, WriteSize, ReadLatency, WriteLatency

        EC2 Resource
            Configuration properties: Instance ID, Owner Id, Root Device Type, Instance Type, Availability Zone
            Metrics:
                Response: Status
                CPU Utilization: CPUUtilization
                Disk I/O: DiskReadBytes, DiskWriteBytes, DiskReadOps, DiskWriteOps, DiskReadRate, DiskWriteRate, DiskIOThroughput, DiskReadOpsRate, DiskWriteOpsRate, DiskOperationThroughput
                Network I/O: NetworkIn, NetworkOut, NetworkInRate, NetworkOutRate, NetworkThroughput

        RDS Resource
            Configuration properties: Instance ID, Database Engine Name, Database Engine Version, Database Instance Class, Allocated Storage Size, Availability Zone
            Metrics:
                Response: Status
                Disk I/O: ReadIOPS, WriteIOPS, ReadLatency, WriteLatency, ReadThroughput, WriteThroughput
                DB Utilization: BinLogDiskUsage, CPUUtilization, DatabaseConnections, FreeableMemory, ReplicaLag, SwapUsage

    Custom Home Pages

    As mentioned above, we have custom home pages for these target types that include basic configuration information, last-24-hours availability, top metrics and the incidents generated.

    [Screenshots: EBS Instance Home Page, EC2 Instance Home Page, RDS Instance Home Page]

    Further Reading:
    1) AWS Plugin download
    2) Installation and Read Me
    3) Screenwatch on SlideShare
    4) Extensibility Programmer's Guide
    5) Amazon Web Services

    Read the article

  • Is there a way of using HTTPS with Amazon's CloudFront CDN and CNAMEs?

    - by Metalshark
    We use Amazon's CloudFront CDN with custom CNAMEs hanging under the main domain (static1.example.com). Although we can break this uniform appearance and use the original whatever123wigglyw00.cloudfront.net URLs to utilise HTTPS, is there another way? Does Amazon or any similar provider offer HTTPS CDN hosting? Are TLS and its selective encryption (SNI: Server Name Indication) available for use somewhere? Footnote: I'm assuming the answer is no, but asking in the hope someone knows. EDIT: Now using Google App Engine https://developers.google.com/appengine/docs/ssl for CDN hosting with SSL support.

    Read the article

  • Amazon AWS EC2 + Puppet, get Puppet to know AWS instance tags

    - by Piotr Jasiulewicz
    I am having a problem with my AWS deployment; I am fairly new to AWS and Puppet. So, coming to my question: can you distinguish Puppet nodes by AWS machine tags or CNAME domains? A little background about the plan:

    - have multiple clusters of machines: one PHP cluster, one legacy PHP cluster, one Java cluster, one Perl cluster
    - control configuration with Puppet - still pretty new to Puppet, but as a developer I like the idea of being able to version-control the configuration of servers
    - have autoscaling enabled on those clusters - obviously the main benefit of the cloud, which makes the much higher cost worth it for any reasonable performance (those Amazon machines are slower than my phone...)
    - deployment controlled by Capistrano; this makes things a lot easier

    So in AWS you get those super-nasty public/private machine DNS names - there is no way you can identify machines by those. To ease the problem, it seems AWS wants you to tag everything - so I did. I found a script that makes a CNAME record for each machine from its "ShortName" tag, thanks to the Route53 API. Every machine has a ShortName tag that becomes its CNAME; unfortunately Puppet still resolves the private DNS name. I'd like to have node 'perl-cluster'{} in Puppet. Anyone have a clue how to achieve this? Thanks
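    One hedged approach: surface the ShortName tag as a custom Facter fact and branch on that fact in a default node, instead of matching on the resolved hostname. A sketch using Facter's external-facts directory and the era's EC2 command-line tools (the awk column and the IAM permission to read tags are assumptions to verify):

        #!/bin/bash
        # /etc/facter/facts.d/ec2_shortname.sh - emits ec2_shortname=<tag value>
        INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
        SHORTNAME=$(ec2-describe-tags --filter "resource-id=${INSTANCE_ID}" \
                                      --filter "key=ShortName" | awk '{ print $5 }')
        echo "ec2_shortname=${SHORTNAME}"

    In site.pp a default node can then do case $::ec2_shortname { /^perl/: { include role::perl } ... }, which gives the effect of node 'perl-cluster' without depending on DNS.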

    Read the article

  • can not connect to SQL running on amazon ec2 machine

    - by njj56
    I am using SQL Server Management Studio 2008 running on an Amazon EC2 machine, and I am unable to connect to the database in my ASP.NET application. The EC2 instance has been set to accept connections over the SQL port. I am also able to remote into the machine as well as view websites hosted on the server. Listed below is the part of the connection string relating to this instance. When the program runs and this connection string is used, it returns TCP error 0 - no response; it just times out.

        <add name="ProjectServer" connectionString="Data Source=*IP ADDRESS HERE*,1433;Initial Catalog=*Catalog Name*;User ID=IP-0A6ED514\Administrator;"/>

    I removed the IP and the catalog name for the example, but I am sure they are correct. The only thing I can think might cause an error is the difference between the user ID and the server name - the server name is ip-0A6ED514\sharepoint but the user name is ip-0A6ED514\administrator when I log into the SQL Server manager on the EC2 instance. A password is not used. Not sure if I would need to leave in a blank string for the password; also not sure if the difference between the server name and the login user ID makes a difference. Any help is appreciated. Thank you.

    Update: when this connection string is used without the port, I get TCP provider error 40; when the port is in there, I get error 0.

    Edit: the SQL server is using Windows authentication - does this make a difference? Usually I always use SQL Server authentication.
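    A hedged observation: passing a Windows account in User ID is neither of the two working patterns - Windows authentication is requested with Integrated Security (and only works when the web app's process identity can authenticate to the SQL box), while a User ID/Password pair implies a SQL Server login, which requires Mixed Mode to be enabled. A sketch of the SQL-authentication variant (the login and password are placeholders to create on the instance):

        <add name="ProjectServer"
             connectionString="Data Source=*IP ADDRESS HERE*,1433;Initial Catalog=*Catalog Name*;User ID=mySqlLogin;Password=myPassword;"/>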

    Read the article

  • Enabling mod_rewrite on Amazon Linux

    - by L. De Leo
    I'm trying to enable mod_rewrite on an Amazon Linux instance. My Directory directives look like this:

        <Directory />
            Order deny,allow
            Allow from all
            Options None
            AllowOverride None
        </Directory>

        <Directory "/var/www/vhosts">
            Order allow,deny
            Allow from all
            Options None
            AllowOverride All
        </Directory>

    And then further down in httpd.conf I have the LoadModule directive:

        ... other modules...
        #LoadModule substitute_module modules/mod_substitute.so
        LoadModule rewrite_module modules/mod_rewrite.so
        #LoadModule proxy_module modules/mod_proxy.so
        ... other modules...

    I have commented out all the Apache modules not needed by WordPress. Still, when I issue an httpd restart and then check the loaded modules with /usr/sbin/httpd -l, I get only:

        [root@foobar]# /usr/sbin/httpd -l
        Compiled in modules:
          core.c
          prefork.c
          http_core.c
          mod_so.c

    Inside the virtual host containing the WordPress site I have an .htaccess containing:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>
        # END WordPress

    The .htaccess is owned by apache, which is the user Apache runs under. The apachectl -t command returns Syntax OK. What am I doing wrong? What should I check?
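    One thing worth noting, hedged as a likely explanation rather than a confirmed diagnosis: httpd -l lists only statically compiled modules, so a DSO such as mod_rewrite will never appear there even when it loads correctly. A sketch of the more telling checks:

        # list dynamically loaded (shared) modules instead of compiled-in ones
        /usr/sbin/httpd -M 2>/dev/null | grep rewrite    # expect: rewrite_module (shared)
        # validate the config and restart cleanly
        apachectl -t && service httpd restart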

    Read the article

  • Amazon VPC NAT not working

    - by rpkelly
    I'm trying to create a NAT instance for my VPC to allow instances on private subnets to connect to the internet (most importantly, S3). I tried following the instructions here: http://docs.amazonwebservices.com/AmazonVPC/2011-07-15/UserGuide/index.html?VPC_NAT_Instance.html . Unfortunately, the instances in the private subnet (call it 10.10.2.0/24) cannot reach the internet. I have done the following:

    - Created a NAT instance (Amazon's ami-vpc-nat-1.0.0-beta.i386-ebs (ami-d8699bb1)) in the public subnet (call it 10.10.1.0/24).
    - Changed "Source / Dest Check" to disabled.
    - Created a new entry in the default routing table (which is used by 10.10.2.0/24) and pointed it at the ID of the newly created instance.
    - Associated an Elastic IP address with the NAT instance.
    - Allowed all outbound traffic on the security group of the NAT instance.
    - Ensured that all traffic could pass between the two subnets.

    I've also tried doing this with an existing instance using iptables, but had no luck. And I have verified that net.ipv4.ip_forward is 1, just in case anyone was wondering. I still have no internet connectivity from the instances on 10.10.2.0/24. Does anyone have any suggestions?
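    For the hand-rolled iptables variant, a minimal sketch of what the NAT box needs (the Amazon NAT AMI is supposed to ship configured this way, so on that image this doubles as a verification checklist):

        # masquerade traffic arriving from the private subnet out the public interface
        iptables -t nat -A POSTROUTING -o eth0 -s 10.10.2.0/24 -j MASQUERADE
        # forwarding must be on, and the nat table should show packet counts climbing
        # as the private instances try to get out
        sysctl net.ipv4.ip_forward
        iptables -t nat -L -n -v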

    Read the article

  • Serving and caching content from Amazon S3 with Tomcat

    - by Rob
    Hi all, we're looking to serve a range of content using Amazon S3 as a store for the content and Tomcat to host the web application. The content is divided into free and paid-for content. We intend to authenticate the users when they access the web application running in Tomcat; based on their authentication we can tell whether a user has access to paid-for content or simply the free stuff. So I envision the flow of a request being something like this:

    1. Authenticated request to Tomcat.
    2. If the user is a "paid" user, display links to premium content.
    3. Direct requests for paid content back through Tomcat, to prevent direct access to it by non-paying users.
    4. Tomcat makes the request to S3 through a web cache to keep our costs down.
    5. Content is returned to the user.

    As we have to pay for each request to S3, I'd ideally like to cache content locally to the Tomcat instance after it has been requested for the first time, to keep costs to a minimum and to speed things up. I would also like to be able to invalidate this cache when we publish fresh content to S3. So, to confirm my proposal: Client Request - Tomcat - Web Cache - S3. To invalidate the cache, I was thinking of using something like PubSubHubbub, with the cache waiting for updates to a feed describing the content it should invalidate. I'd appreciate some general feedback on this approach, as I've no real experience of caching and I'm sure I've made some invalid assumptions. I'd also appreciate any recommendations for caching technologies. Thanks.

    Read the article

  • Amazon EC2 instance missing Network Interface

    - by Sergiks
    I am running Linux on a t1.micro instance at Amazon EC2. Once I noticed brute-force ssh login attempts from a certain IP, after a little Googling I issued the following two commands (different IP shown):

        iptables -A INPUT -s 202.54.20.22 -j DROP
        iptables -A OUTPUT -d 202.54.20.22 -j DROP

    Either this, or maybe some other action such as a yum upgrade, caused the following fiasco: after rebooting the server, it came up without the Network Interface! I can only connect to it through the AWS Management Console Java ssh client - via the local 10.x.x.x address. The console's "Attach Network Interface" as well as "Detach..." are greyed out for this instance. The "Network Interfaces" item on the left does not offer any subnets to choose from to create a new network interface. Please advise: how can I recreate a Network Interface for the instance?

    Upd. The instance is not accessible from outside: it cannot be pinged, SSH'ed, or reached over HTTP on port 80. Here's the ifconfig output:

        eth0      Link encap:Ethernet  HWaddr 12:31:39:0A:5E:06
                  inet addr:10.211.93.240  Bcast:10.211.93.255  Mask:255.255.255.0
                  inet6 addr: fe80::1031:39ff:fe0a:5e06/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:1426 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:1371 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:152085 (148.5 KiB)  TX bytes:208852 (203.9 KiB)
                  Interrupt:25

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

    What is also unusual: a new micro instance I created from scratch, with no relation to the troubled one, was not pingable either.
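    A hedged reading of that output: eth0 is up, addressed and passing packets, so the NIC itself looks fine, and the lost reachability may instead come from firewall rules that persisted across the reboot. A quick way to rule that out (assuming the default chain policies are ACCEPT, as on a stock image):

        # list the active rules with indices, then flush the DROPs added earlier and retry
        iptables -L -n -v --line-numbers
        iptables -F INPUT
        iptables -F OUTPUT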

    Read the article

  • Managing persistent data on an Amazon EC2 web server

    - by Derek
    I've just started trying out Amazon's EC2 service for running an ASP.NET web app which uses a SQL Server 2005 Express database. I have some questions about how best to configure and operate it for reliability, and I'm hoping to tap into some collective wisdom here, as this is my first foray into EC2. Here's how I have it configured currently:

    - OS: Windows 2003
    - SQL Server Express 2005
    - Web content stored on an EBS volume (E drive)
    - Database data stored on an EBS volume (E drive)
    - Database backups to the C drive, then copied off to S3
    - Elastic IP address attached to the production instance

    Now, when I make a change to the OS configuration, I make a new AMI using the bundle feature. Unfortunately, I found that this results in significant downtime while the bundle is created and the new instance is started. It seems that when I'm ready to make a new AMI, I should:

    1. Start up a new temporary instance.
    2. Detach the EBS volume from the production instance.
    3. Detach the IP address from the production instance.
    4. Attach the IP address to the temporary instance.
    5. Attach the EBS volume to the temporary instance.
    6. Create an AMI from the production instance.
    7. After the production instance restarts, reverse the attach/detach steps to put it back into production.

    Is this the right order of events to prevent any chance of corrupting the EBS volume? Will the EBS volume become corrupt if I detach it while a database write is taking place? Should I snapshot the EBS volume of the production instance and attach it to the temporary instance instead? Or could taking a snapshot of the EBS volume while it's in use cause corruption? Any suggestions to improve reliability and operations?
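    On the snapshot question, a hedged note: EBS snapshots are point-in-time and crash-consistent, so snapshotting an in-use volume won't corrupt the source, though a database caught mid-write may need recovery when the snapshot is restored - briefly pausing SQL Server while the snapshot starts avoids that. The snapshot route also sidesteps moving the production volume at all (IDs and zone below are placeholders):

        aws ec2 create-snapshot --volume-id vol-xxxxxxxx --description "pre-AMI copy"
        aws ec2 create-volume --snapshot-id snap-xxxxxxxx --availability-zone us-east-1a
        # attach the new volume to the temporary instance instead of detaching production's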

    Read the article

  • Run a rails server on Amazon EC2 [on hold]

    - by Jashwant
    Context: I've tried the rubber gem, but it does not fulfill my requirements (I needed to deploy on an existing instance, so don't recommend rubber). So I followed this excellent tutorial: http://stackoverflow.com/questions/15535140/installing-ruby-2-0-and-rails-4-0-0beta-on-aws-ec2 Now I have Ruby 2.0 and Rails 4.0.0 running on AWS EC2. I successfully ran the server with RDS (MySQL) as the database and the default WEBrick as the server (using the command rails server). But I've read that WEBrick is a development server and shouldn't be used in production.

    What I tried: I googled and came up with some alternatives: Capistrano; nginx or Apache with Passenger; Passenger with Capistrano; Unicorn; Puma.

    My questions: What exactly are Capistrano and Passenger? Are they middleware to ease my deployment process? I don't see any difficulty in running the rails server command. If they are just middleware, does nginx with Passenger and Capistrano make any sense? Why would I add a learning curve (learning nginx, Passenger and Capistrano configs) just to run my server? Can't I just use nginx to deploy my app? What combination should I use on Amazon EC2 (or any other production server)?
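    A hedged clarification of the roles, since they are not interchangeable: Capistrano automates pushing code out to servers (it serves nothing itself), while Passenger is an application server that keeps Rails worker processes running behind nginx or Apache - the production role WEBrick is too limited for. The usual minimal route looks like:

        # install Passenger and build an nginx with its module compiled in
        gem install passenger
        passenger-install-nginx-module
        # then point the nginx server block's root at the app's public/ directory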

    Read the article

  • Integrating Amazon S3 in Java via NetBeans IDE

    - by Geertjan
    To continue from yesterday, let's set up a scenario that enables us to make use of this drag/drop service in NetBeans IDE: The above service is applicable to Amazon S3, an Amazon storage provider that is typically used to store large binary files. In Amazon S3, every object stored is contained in a bucket. Buckets partition the namespace of objects stored in Amazon S3. More on buckets here. Let's use the tools in NetBeans IDE to create a Java application that accesses our Amazon S3 buckets. Create a Java application named "AmazonBuckets" with a main class named "AmazonBuckets". Open the main class and then drag the above service into the main method of the class. Now, NetBeans IDE will create all the other classes and the properties file that you see in the screenshot below. The first thing to do is to open the properties file above and enter the access key and secret:

        access_key=SOMETHING
        secret=SOMETHINGELSE

    Now you're all set up. Make sure to, of course, actually have some buckets available. Then rewrite the Java class to parse the XML that is returned via the generated code:

        package amazonbuckets;

        import java.io.ByteArrayInputStream;
        import java.io.IOException;
        import javax.xml.parsers.DocumentBuilder;
        import javax.xml.parsers.DocumentBuilderFactory;
        import javax.xml.parsers.ParserConfigurationException;
        import org.netbeans.saas.amazon.AmazonS3Service;
        import org.netbeans.saas.RestResponse;
        import org.w3c.dom.DOMException;
        import org.w3c.dom.Document;
        import org.w3c.dom.Node;
        import org.w3c.dom.NodeList;
        import org.xml.sax.InputSource;
        import org.xml.sax.SAXException;

        public class AmazonBuckets {

            public static void main(String[] args) {
                try {
                    // call the generated saas wrapper and read the raw XML bucket listing
                    RestResponse result = AmazonS3Service.getBuckets();
                    String dataAsString = result.getDataAsString();
                    // parse the XML and print the name of each <Bucket> element
                    DocumentBuilderFactory dbFactory = DocumentBuilderFactory.newInstance();
                    DocumentBuilder dBuilder = dbFactory.newDocumentBuilder();
                    Document doc = dBuilder.parse(
                            new InputSource(new ByteArrayInputStream(dataAsString.getBytes("utf-8"))));
                    NodeList bucketList = doc.getElementsByTagName("Bucket");
                    for (int i = 0; i < bucketList.getLength(); i++) {
                        Node node = bucketList.item(i);
                        System.out.println("Bucket Name: " + node.getFirstChild().getTextContent());
                    }
                } catch (IOException | ParserConfigurationException | SAXException | DOMException ex) {
                }
            }
        }

    That's all. This is simpler to set up than the scenario described yesterday. Also notice that there are other Amazon S3 services you can interact with from your Java code, again after generating a heap of code via drag/drop into a Java source file. I tried the above, e.g., I created a new Amazon S3 bucket after dragging "createBucket", adding my credentials in the properties file, and then running the code that had been created. I.e., without adding a single line of code I was able to programmatically create new buckets. The above outlines a handy set of tools and techniques to use if you want to let your users store and access data in Amazon S3 buckets directly from the application you've created for them.

    Read the article

  • SQL Express 2008 R2 on Amazon EC2 instance: tons of free memory, poor performance

    - by gravyface
    The old SQL Express 2005 was running on a low-end single-Xeon Dell server with RAID 5 7200 RPM disks and 2 GB RAM (SBS 2003). I have not done any baseline measurements on the old physical server, but the web app is used by half a dozen people (maybe 2 concurrently), so I figured "how bad can an Amazon EC2 instance be?" It's pretty horrible: a difference of 8 seconds of load time on one screen. First of all, I'm not a SQL guru, but here's what I've tried:

    - Had a Small instance, now running a c1.medium (High-CPU Medium) Windows 2008 R2 32-bit EBS-backed instance running IIS 7.5 and SQL Express 2008 R2. No noticeable improvement.
    - Changed the page file from a fixed 256 MB to Automatic.
    - Set up a striped mirror from within Disk Management with two attached 1 GB EBS volumes. Moved the database and transaction log there, leaving everything else on the boot EBS volume. No noticeable change.
    - Looked at memory: ~1000 MB of physical memory free (1.7 GB total). Changed the SQL instance to use a minimum of 1024 MB of RAM; restarted the server, no change in memory usage. SQL is still only using ~28 MB of RAM(!).

    So I'm thinking: this database is tiny (28 MB), why isn't the whole thing cached in RAM? Surely that would speed up performance. The transaction log is 241 MB, which seems large in comparison - has this not been committed? Is it a cause of performance degradation? I recall something about recovery models and log sizes somewhere in my travels, but I'm not positive. Another thing: the old server was running SQL Express 2005. Not sure if that has any impact, but I tried changing the compatibility level from SQL 2000 to 2008, and that had no effect. Anyway, what else can I try here? It seems ridiculous to throw more virtual hardware at this thing. I know I/O is going to be rough on EBS volumes, but surely others are successfully running small .NET/SQL apps on reasonably priced instances?
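    On the log-size hunch, a hedged pointer: a transaction log that only ever grows usually means the database is in the FULL recovery model with no log backups being taken; switching a small database to SIMPLE (or scheduling log backups) keeps it in check. A quick check from the instance (the instance name is an assumption):

        sqlcmd -S .\SQLEXPRESS -E -Q "SELECT name, recovery_model_desc FROM sys.databases"

    Separately, SQL Server caches pages on demand rather than preloading, so ~28 MB of buffer use on a 28 MB database is itself normal; the slow screens are more likely EBS I/O latency or query plans than cache pressure.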

    Read the article

  • How to move Mdadm RAID drive (EBS based) to different AWS Instance

    - by Stanley
    We have a media-rich web application that is hosted on AWS. We have several web servers and an NFS server. On the NFS server (a Linux server) we have several EBS volumes that are mounted, and we've used mdadm to combine the mounted volumes into a single RAID volume. The web servers simply access the NFS storage through a mount point. Amazon has now let us know that they will be performing power maintenance on this server in a couple of days' time. Since all our media is on here, it would render our site unusable for the hours while Amazon is working on it. We want to try to prevent this downtime. I was thinking we could do so by setting up a new server temporarily, attaching the EBS drives (RAID volume) to that server, and having our web servers point there during the maintenance. This is a very high-risk operation, since it involves several terabytes of our production data. What would be the safe way to move our logical RAID drive (md0) to a new Amazon instance? I was hoping I could start by building the new server, mounting the EBS volumes and assembling the RAID partition using mdadm --assemble --scan before unmounting from the existing instance, so that I could first test that everything works - thus having it mounted on two instances at the same time - but I don't believe that is possible with the way filesystems work. How do I move a Linux software RAID to a new machine? suggests a way to move drives, but isn't really a cloud-based question. Perhaps there are simpler ways to prevent system downtime with our solution being hosted in the cloud? I have considered taking an EBS snapshot, but that tries to replicate all the many terabytes of mounted storage, so it is not a practical solution. Any ideas?
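    One constraint worth noting up front (hedged): an EBS volume can be attached to only one instance at a time, so the array cannot be live-tested on both machines. The move itself is something mdadm handles well, because the RAID metadata lives in superblocks on the volumes; a sketch, with IDs, devices and mount points as placeholders:

        # on the old instance: quiesce and stop the array
        umount /mnt/media
        mdadm --stop /dev/md0
        # detach each member volume and attach it to the new instance (repeat per volume)
        aws ec2 detach-volume --volume-id vol-xxxxxxxx
        aws ec2 attach-volume --volume-id vol-xxxxxxxx --instance-id i-yyyyyyyy --device /dev/sdf
        # on the new instance: scan the superblocks and reassemble md0
        mdadm --assemble --scan
        mount /dev/md0 /mnt/media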

    Read the article

  • simple and reliable centralized logging inside Amazon VPC

    - by Nakedible
    I need to set up centralized logging for a set of servers (10-20) in an Amazon VPC. The logging should not lose any log messages if any single server goes offline - or if an entire availability zone goes offline. It should also tolerate packet loss and other normal network conditions without losing or duplicating messages. It should store the messages durably, at minimum on two different EBS volumes in two availability zones, though S3 would be a good place as well. It should also be real-time, so that messages arrive in two different availability zones within seconds of their generation. I also need to sync log files not generated via syslog, so a syslog-only centralized logging solution would not fulfill all the needs, although I guess that limitation could be worked around. I have already reviewed a few solutions, which I will list here:

    Flume to Flume to S3: I could set up two log servers as Flume hosts which would store log messages either locally or in S3, and configure all the servers with Flume to send all messages to both servers, using the end-to-end reliability options. That way the loss of a single server shouldn't cause lost messages, and all messages would arrive in two availability zones in real time. However, there would need to be some way to join the logs of the two servers, deduplicating all the messages delivered to both. This could be done by adding a unique id on the sending side to each message and then running some manual deduplication passes over the log files. I haven't found an easy solution to the duplication problem.

    Logstash to Logstash to ElasticSearch: I could install Logstash on the servers and have them deliver to a central server via AMQP, with the durability options turned on. However, for this to work I would need to use one of the clustering-capable AMQP implementations, or fan out the delivery just as in the Flume case. AMQP seems to be yet another moving part with several implementations and no real guidance on what works best for this sort of setup. And I'm not entirely convinced that I could get actual end-to-end durability from Logstash to ElasticSearch, assuming crashing servers in between. The fan-out solutions run into the deduplication problem again. The best solution that would seem to handle all the cases is Beetle, which seems to provide high availability and deduplication via a Redis store. However, I haven't seen any guidance on how to set this up with Logstash, and Redis is one more moving part again for something that shouldn't be terribly difficult.

    Logstash to ElasticSearch: I could run Logstash on all the servers, have all the filtering and processing rules in the servers themselves, and just have them log directly to a remote ElasticSearch server. I think this should give me reliable logging, and I can use the ElasticSearch clustering features to share the database transparently. However, I am not sure whether the setup actually survives Logstash restarts and intermittent network problems without duplicating messages in a failover case or similar. But this approach sounds pretty promising.

    rsync: I could just rsync all the relevant log files to two different servers. The reliability aspect should be perfect here, as the files should be identical to the source files after a sync is done. However, doing an rsync several times per second doesn't sound fun. Also, I need the logs to be untamperable after they have been sent, so the rsyncs would need to be in append-only mode. And log rotations mess things up unless I'm careful.

    rsyslog with RELP: I could set up rsyslog to send messages to two remote hosts via RELP and have a local queue to store the messages. There is the deduplication problem again, and RELP itself might also duplicate some messages. However, this would only handle the things that log via syslog.

    None of these solutions seem terribly good, and they still have many unknowns, so I am asking for more information here from people who have set up centralized reliable logging as to what the best tools are to achieve that goal.
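    For the RELP option, a minimal sketch of the client side with an on-disk action queue (hedged: this is legacy rsyslog 5.x directive syntax, the hostname and port are placeholders, and one such action block is needed per destination):

        # /etc/rsyslog.conf - forward everything via RELP, queuing to disk while the peer is down
        $ModLoad omrelp
        $ActionQueueType LinkedList
        $ActionQueueFileName relp_fwd1
        $ActionQueueSaveOnShutdown on
        $ActionResumeRetryCount -1
        *.* :omrelp:logserver1:2514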

    Read the article

  • DNS with name.com and Amazon S3

    - by aledalgrande
    I have a website in a bucket on Amazon S3, and recently started to get emails from Google: "Googlebot can't access your site". When I go to Webmaster Tools and try to fetch, it in fact doesn't work. People in locations different from mine have also sometimes reported that they could not access the website. Now, out of curiosity, I tried from my terminal:

        $ host xxx
        xxx is an alias for xxx.s3-website-us-west-1.amazonaws.com.
        xxx.s3-website-us-west-1.amazonaws.com is an alias for s3-website-us-west-1.amazonaws.com.
        s3-website-us-west-1.amazonaws.com has address yyy.yyy.yyy.yyy

    And when I try with dig:

        $ dig xxx
        ; <<>> DiG 9.8.3-P1 <<>> xxx
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 17860
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;xxx.                                   IN      A

        ;; ANSWER SECTION:
        xxx.                    300     IN      CNAME   xxx.s3-website-us-west-1.amazonaws.com.
        xxx.s3-website-us-west-1.amazonaws.com. 60 IN   CNAME   s3-website-us-west-1.amazonaws.com.
        s3-website-us-west-1.amazonaws.com.     60 IN   A       yyy

        ;; Query time: 1514 msec
        ;; SERVER: 75.75.75.75#53(75.75.75.75)
        ;; WHEN: Fri Aug 22 12:32:13 2014
        ;; MSG SIZE  rcvd: 127

    It seems OK to me. Why would Google tell me there is a DNS error?

    UPDATE: Google also cannot fetch robots.txt, but I can fetch it from my browser.

    UPDATE 2: I have a forwarding on the root to the www.* hostname:

        $ dig thenifty.me
        ; <<>> DiG 9.8.3-P1 <<>> thenifty.me
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 49286
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;thenifty.me.                           IN      A

        ;; AUTHORITY SECTION:
        thenifty.me.            300     IN      SOA     ns1hwy.name.com. support.name.com. 1 10800 3600 604800 300

        ;; Query time: 148 msec
        ;; SERVER: 75.75.75.75#53(75.75.75.75)
        ;; WHEN: Fri Aug 22 13:32:56 2014
        ;; MSG SIZE  rcvd: 88

    Read the article
