Search Results

Search found 19378 results on 776 pages for 'richa media and services'.


  • Static NAT in AWS's Virtual Private Cloud (VPC)

    - by user1050797
    Currently, in a VPC with a public and a private subnet, all internet-bound traffic from the private subnet can be routed via a NAT instance. The NAT instance port-address-translates the packet's source IP to the NAT instance's Elastic IP, so the public server can reply to this public address. This is a PAT mechanism. My question: is there a way for me to do static NAT on my NAT instance, i.e. use the same NAT instance to statically NAT an unassociated but reserved Elastic IP to a private-subnet host? The NAT instance would then behave like a physical firewall doing static NAT for a bunch of private IPs.
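
    One possible way to approach this on the AWS side (a sketch only, not a confirmed VPC feature): give the NAT instance's network interface a secondary private IP address and associate the reserved, currently unassociated Elastic IP with that secondary address. The 1:1 forwarding to the private-subnet host still has to be configured with iptables DNAT/SNAT rules on the instance itself. In the boto3 sketch below, the interface ID, private address and allocation ID are placeholders.

        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")

        # Add a secondary private IP to the NAT instance's network interface.
        ec2.assign_private_ip_addresses(
            NetworkInterfaceId="eni-0123456789abcdef0",
            PrivateIpAddresses=["10.0.0.50"],
        )

        # Associate the reserved Elastic IP with that secondary private address
        # instead of the interface's primary address.
        ec2.associate_address(
            AllocationId="eipalloc-0123456789abcdef0",
            NetworkInterfaceId="eni-0123456789abcdef0",
            PrivateIpAddress="10.0.0.50",
        )

        # On the NAT instance itself, iptables DNAT/SNAT rules (not shown) would
        # forward traffic arriving on 10.0.0.50 to the private-subnet host.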

    Read the article

  • SSH broken after hostname change on EC2-hosted Ubuntu

    - by dimadima
    I changed my instance's hostname using the hostname utility and then set it in /etc/hostname so that the new name survives reboot. My main motivation was for differentiating between instances at the prompt using the \h format in PS1. EDIT I also changed permissions on my home directory. I made my home directory group writeable. END EDIT Now I can no longer SSH into the machine. The short of it is the error Permission denied (publickey). Running ssh -v, the more verbose output is: debug1: Authentications that can continue: publickey debug1: Next authentication method: publickey debug1: Offering RSA public key: /Users/dmitry/.ssh/id_rsa debug1: Authentications that can continue: publickey debug1: Trying private key: /Users/dmitry/.ssh/ec2key.pem debug1: read PEM private key done: type RSA debug1: Authentications that can continue: publickey debug1: No more authentication methods to try. Permission denied (publickey). Should I have done something after changing the hostname? Now I can't get into the instance! :(

    Read the article

  • How to automate an Amazon EC2 instance startup, execution of some commands and shutdown?

    - by Howiecamp
    I need to download 100 GB of files (spread across about 150 files) within a 7-day period before they expire. The download is rate-limited by the host, so it takes MUCH longer than the theoretical transfer rate based on normal Internet speeds. I have a script of curl (http://curl.haxx.se/docs/manpage.html) commands, one line per file. I had the idea of automatically spinning up n EC2 instances, executing the commands, FTPing the files to a central location, and then shutting down the machines. How would I do this? I don't care whether it's Linux or Windows.
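
    A minimal sketch of the spin-up side, assuming boto3; the AMI ID, key pair, instance type and download URLs are all placeholders. Each worker gets its own slice of the curl commands as user data, and because the shutdown behaviour is set to terminate, the final shutdown -h in the script tears the instance down when its downloads finish.

        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")

        # Split the full list of curl commands into one chunk per worker instance.
        chunks = [
            ["curl -O http://example.com/files/file_001.bin",
             "curl -O http://example.com/files/file_002.bin"],
            ["curl -O http://example.com/files/file_003.bin"],
        ]

        for commands in chunks:
            script = ["#!/bin/bash", "cd /tmp"] + commands + [
                "# ... copy the results to S3/FTP here ...",
                "shutdown -h now",
            ]
            ec2.run_instances(
                ImageId="ami-0123456789abcdef0",   # placeholder AMI
                InstanceType="t3.micro",
                KeyName="my-key",
                MinCount=1,
                MaxCount=1,
                UserData="\n".join(script) + "\n",
                # With this setting, the script's 'shutdown -h now' terminates the instance.
                InstanceInitiatedShutdownBehavior="terminate",
            )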

    Read the article

  • AWS VPN Tunnel going down without traffic

    - by Asfura
    I managed to set up a site-to-site VPN connection from Amazon VPC to a company's network, and after a lot of configuration it was working fine, but now I've realized that the VPN tunnel goes DOWN every time there's no traffic going through for a couple of minutes. The only way I have found to generate traffic is to reach the Amazon instance from the company's network, and then the tunnel comes up again. I had a cronjob doing a ping every minute, but I think there should be a keepalive option somewhere, or at least a log file of the tunnels to find out what's going on. Any ideas to keep the tunnel up and/or bring it up from the Amazon side? The firewall is a Check Point R75.20; it only allows one tunnel at a time for the same subnet, so I can't have both tunnels active. Thank you; any questions, just ask. EDIT I forgot to add: the ping keepalive was working great (maybe generating a bit of traffic, but nothing to worry about); the connection dropped because I had to restart the instance, and in that little time it dropped me.

    Read the article

  • Managing Many External Hosts Using EC2 and Route 53

    - by futureal
    Looking for a "best practice" answer to managing externally-addressable hosts using the combination of Amazon EC2 and Amazon Route 53, without using Elastic IPs for each host. In my scenario I will have 30+ hosts that need to be accessible from outside EC2, so directly using internal DNS will not work. In the past, I have addressed hosts by assigning an elastic IP to that host (let's say, 55.55.55.55) and then creating an associated A record. For example, let's say I want to create "ec2-corp01.mydomain.com" I might do: ec2-corp01.mydomain.com. A 55.55.55.55 300 Then on that EC2 instance, I would assign the Elastic IP of 55.55.55.55, and everything works fine. Of course, to make this work, I need to have one Elastic IP per instance, which is something I'd like to avoid if possible; I'd like the infrastructure to be more dynamic. So my thought is to try something like: Create a script that queries the internal EC2 tools to determine an instance's private hostname On instance boot, call that script to determine its hostname, and then using the command-line Route 53 interface to find and update that hostname to its current internal hostname Since the host will have a relatively low TTL (let's say 300 as above, or 5 minutes) it should take effect pretty quickly Is this a good idea? Is there a better or more widely accepted way to handle it? If it IS a good idea, what type of record should I be creating? A CNAME that points to the internal host, like ec2-55-55-55-55.compute-1.amazonaws.com? Is an A record better or worse? Thanks!

    Read the article

  • Importing XML into an AWS RDS instance

    - by RoyHB
    I'm trying to load some XML into an AWS RDS (MySQL) instance. The XML looks like this (it's an XML dump of the ISO-3166 codes): <?xml version="1.0" encoding="UTF-8"?> <countries> <countries name="Afghanistan" alpha-2="AF" alpha-3="AFG" country-code="004" iso_3166-2="ISO 3166-2:AF" region-code="142" sub-region-code="034"/> <countries name="Åland Islands" alpha-2="AX" alpha-3="ALA" country-code="248" iso_3166-2="ISO 3166-2:AX" region-code="150" sub-region-code="154"/> <countries name="Albania" alpha-2="AL" alpha-3="ALB" country-code="008" iso_3166-2="ISO 3166-2:AL" region-code="150" sub-region-code="039"/> <countries name="Algeria" alpha-2="DZ" alpha-3="DZA" country-code="012" iso_3166-2="ISO 3166-2:DZ" region-code="002" sub-region-code="015"/> The command that I'm running is: LOAD XML LOCAL INFILE '/var/www/ISO-3166_SMS_Country_Codes.xml' INTO TABLE `ISO-3661-codes`(`name`,`alpha-2`,`alpha-3`,`country-code`,`region-code`,`sub-region-code`); The error message I get is: ERROR 1148 (42000): The used command is not allowed with this MySQL version The infile that is referenced exists, I selected a database before running the command, and I have appropriate privileges on the database. The column names in the database table exactly match the XML field names.
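
    ERROR 1148 is usually the server (or client) refusing LOCAL INFILE, and RDS cannot read a file that lives on your web server anyway, so one workaround is to parse the XML client-side and insert the rows over a normal connection. A sketch assuming Python with PyMySQL; the endpoint, credentials and database name are placeholders, and the table and column names match the statement above.

        import xml.etree.ElementTree as ET
        import pymysql  # assumes the PyMySQL driver is installed

        tree = ET.parse("/var/www/ISO-3166_SMS_Country_Codes.xml")
        rows = [
            (c.get("name"), c.get("alpha-2"), c.get("alpha-3"),
             c.get("country-code"), c.get("region-code"), c.get("sub-region-code"))
            for c in tree.getroot().findall("countries")
        ]

        conn = pymysql.connect(host="my-instance.rds.amazonaws.com", user="user",
                               password="password", database="mydb")
        with conn.cursor() as cur:
            cur.executemany(
                "INSERT INTO `ISO-3661-codes` "
                "(`name`, `alpha-2`, `alpha-3`, `country-code`, `region-code`, `sub-region-code`) "
                "VALUES (%s, %s, %s, %s, %s, %s)",
                rows,
            )
        conn.commit()
        conn.close()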

    Read the article

  • Why do I get xfs_freeze "Operation not supported" error with ec2-consistent-snapshot? Debian Squeeze w/ext4 filesystem

    - by Michael Endsley
    I'm running the following command: [root@somehost ~]# ec2-consistent-snapshot --aws-credentials-file '/some/dir/file' --mysql --mysql-socket '/var/run/mysqld/mysql.sock' --mysql-username 'backup' --mysql-password 'password' --freeze-filesystem '/dev/xvda1' vol-xxxxxx It returns this error: xfs_freeze: cannot freeze filesystem at /dev/xvda1: Operation not supported ec2-consistent-snapshot: ERROR: xfs_freeze -f /dev/xvda1: failed(256) snap-eeb66393 xfs_freeze: cannot unfreeze filesystem mounted at /dev/xvda1: Invalid argument ec2-consistent-snapshot: ERROR: xfs_freeze -u /dev/xvda1: failed(256) This is being run on Debian Squeeze with the ext4 Linux filesystem. Can anyone explain this error to me, or what might be its cause? When googling, I found information about it needing to be executed with sudo, but I'm performing the entire operation as root. I also found some posts about trying to run it after a CentOS upgrade using yum, but the situation appeared dissimilar. It's difficult to find things referring to this situation exactly. xfs_freeze is available for use on the filesystem. Is it possible that the filesystem, despite being ext4, somehow doesn't support freezing? Sorry if I've missed some bit of StackExchange etiquette with this post -- it's my first venture here!

    Read the article

  • Fail rate of EBS snapshots and AMIs?

    - by user784637
    According to Amazon, the failure rate of EBS volumes is: "As an example, volumes that operate with 20 GB or less of modified data since their most recent Amazon EBS snapshot can expect an annual failure rate (AFR) of between 0.1% – 0.5%, where failure refers to a complete loss of the volume" (http://aws.amazon.com/ebs/). However, I was curious to know the failure rate of EBS snapshots and private AMIs that a system admin would take. Since EBS snapshots and AMIs are stored in Amazon S3, is it safe to assume that the likelihood you cannot roll back to a previous snapshot or AMI is the same as the likelihood that a file gets lost in S3? Amazon S3's standard storage is: "... Designed to provide 99.999999999% durability and 99.99% availability of objects over a given year." (http://aws.amazon.com/s3/)

    Read the article

  • I don't qualify for the Junk Mail Reporting Program. Alternatives?

    - by Marius
    Hello there! :) I am on shared hosting and therefore do not qualify for the MSN Junk Mail Reporting Program (JMRP). I am concerned about people who don't want my email clicking the "report as junk" button in Hotmail, and I understand that JMRP would send me a message each time somebody clicks the button on one of my emails. I was wondering if there are any other methods I can use to get these messages, so that I can avoid mailing the same person again. Thank you for your time. Kind regards, Marius

    Read the article

  • VRF Internet Gateway Multiple External IP's 1 Internal IP to AWS

    - by user223903
    Trying to set up VRF for the first time and it's not working for me, even though I keep reading everything online. The IPs are different from real life. I have an Internet connection, and in the current setup below I can ping my router at 195.45.73.22. I have a block of IP addresses, 195.45.121.0/27. I want to set up multiple VPNs to AWS, so I need to have multiple external IPs, hence the block of IP addresses. I have set up the 2nd and 3rd IP addresses but cannot ping them from outside. Any help would be appreciated. Bryan ip source-route ! ip vrf Internet rd 1:1 route-target export 1:1 route-target import 1:1 ip vrf AWSSydney1 rd 2:2 route-target export 2:2 route-target import 2:2 route-target import 1:1 ip vrf AWSSydney2 rd 3:3 route-target export 3:3 route-target import 3:3 route-target import 1:1 ip cef no ip domain lookup no ipv6 cef multilink bundle-name authenticated interface FastEthernet0/0 description Vocus Internet no ip address speed 100 full-duplex interface FastEthernet0/0.1 encapsulation dot1Q 1 native ip address 195.45.73.22 255.255.255.252 interface FastEthernet0/0.2 encapsulation dot1Q 2 ip vrf forwarding AWSSydney1 ip address 195.45.121.1 255.255.255.224 interface FastEthernet0/0.3 encapsulation dot1Q 3 ip vrf forwarding AWSSydney2 ip address 195.45.121.2 255.255.255.224 interface FastEthernet0/1 description LAN_SIDE ip address 10.0.0.5 255.255.255.0 speed 100 full-duplex no mop enabled ip forward-protocol nd ip route 0.0.0.0 0.0.0.0 195.45.73.21 ip route vrf Internet 0.0.0.0 0.0.0.0 195.45.73.21

    Read the article

  • Replicate / SYNC / Copy thousands of files

    - by rihatum
    Windows 32-bit box (Server OS), 4 GB RAM, with an 800 GB LUN mapped to the box as a local drive. Around 700 GB of text files (yes, text files, plus a few thousand Word documents) nested in thousands of directories. I need to move this to new storage and map another server to it. What would be the best way to go about it? What I did was map the existing LUN to our new box, map a LUN from the new storage to the new box too, and try copying (Windows copy), but that wasn't good/fast enough considering the downtime. I am now looking for either a script or a utility (preferably open source/free) to move this amount of data at a good speed. 2 x 1 Gb NICs teamed (EtherChannel, 2 Gbps). Any suggestions or pointers would be of great help. Thanks!

    Read the article

  • Cloudformation with Ubuntu throwing errors

    - by Sammaye
    I have been doing some reading and have come to the understanding that if you wish to use a launchConfig with Ubuntu you will need to install the cfn-init file yourself which I have done: "Properties" : { "KeyName" : { "Ref" : "KeyName" }, "SpotPrice" : "0.05", "ImageId" : { "Fn::FindInMap" : [ "AWSRegionArch2AMI", { "Ref" : "AWS::Region" }, { "Fn::FindInMap" : [ "AWSInstanceType2Arch", { "Ref" : "InstanceType" }, "Arch" ] } ] }, "SecurityGroups" : [ { "Ref" : "InstanceSecurityGroup" } ], "InstanceType" : { "Ref" : "InstanceType" }, "UserData" : { "Fn::Base64" : { "Fn::Join" : ["", [ "#!/bin/bash\n", "apt-get -y install python-setuptools\n", "easy_install https://s3.amazonaws.com/cloudformation-examples/aws-cfn-bootstrap-1.0-6.tar.gz\n", "cfn-init ", " --stack ", { "Ref" : "AWS::StackName" }, " --resource LaunchConfig ", " --configset ALL", " --access-key ", { "Ref" : "WorkerKeys" }, " --secret-key ", {"Fn::GetAtt": ["WorkerKeys", "SecretAccessKey"]}, " --region ", { "Ref" : "AWS::Region" }, " || error_exit 'Failed to run cfn-init'\n" ]]}} But I have a problem with this setup that I cannot seem to get a decent answer to. I keep getting this error in the logs: Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[DEBUG]: config-scripts-per-once already ran once Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[DEBUG]: handling scripts-per-boot with freq=None and args=[] Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[DEBUG]: handling scripts-per-instance with freq=None and args=[] Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[DEBUG]: handling scripts-user with freq=None and args=[] Jun 15 12:02:34 ip-0 [CLOUDINIT] cc_scripts_user.py[WARNING]: failed to run-parts in /var/lib/cloud/instance/scripts Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[WARNING]: Traceback (most recent call last):#012 File "/usr/lib/python2.7/dist-packages/cloudinit/CloudConfig/__init__.py", line 117, in run_cc_modules#012 cc.handle(name, run_args, freq=freq)#012 File "/usr/lib/python2.7/dist-packages/cloudinit/CloudConfig/__init__.py", line 78, in handle#012 [name, self.cfg, self.cloud, cloudinit.log, args])#012 File "/usr/lib/python2.7/dist-packages/cloudinit/__init__.py", line 326, in sem_and_run#012 func(*args)#012 File "/usr/lib/python2.7/dist-packages/cloudinit/CloudConfig/cc_scripts_user.py", line 31, in handle#012 util.runparts(runparts_path)#012 File "/usr/lib/python2.7/dist-packages/cloudinit/util.py", line 223, in runparts#012 raise RuntimeError('runparts: %i failures' % failed)#012RuntimeError: runparts: 1 failures Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[ERROR]: config handling of scripts-user, None, [] failed Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[DEBUG]: handling keys-to-console with freq=None and args=[] Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[DEBUG]: handling phone-home with freq=None and args=[] Jun 15 12:02:34 ip-0 [CLOUDINIT] __init__.py[DEBUG]: handling final-message with freq=None and args=[] Jun 15 12:02:34 ip-0 [CLOUDINIT] cloud-init-cfg[ERROR]: errors running cloud_config [final]: ['scripts-user'] I have absolutely no idea what scripts-user means and Google is not helping much here either. I can, when I ssh into the server, see that it runs the userdata script since I can access cfn-init as a command whereas I cannot in the original AMI the instance is made from. 
However I have a launchConfig: "Comment" : "Install a simple PHP application", "AWS::CloudFormation::Init" : { "configSets" : { "ALL" : ["WorkerRole"] }, "WorkerRole" : { "files" : { "/etc/cron.d/worker.cron" : { "content" : "*/1 * * * * ubuntu /home/ubuntu/worker_cron.php &> /home/ubuntu/worker.log\n", "mode" : "000644", "owner" : "root", "group" : "root" }, "/home/ubuntu/worker_cron.php" : { "content" : { "Fn::Join" : ["", [ "#!/usr/bin/env php", "<?php", "define('ROOT', dirname(__FILE__));", "const AWS_KEY = \"", { "Ref" : "WorkerKeys" }, "\";", "const AWS_SECRET = \"", { "Fn::GetAtt": ["WorkerKeys", "SecretAccessKey"]}, "\";", "const QUEUE = \"", { "Ref" : "InputQueue" }, "\";", "exec('git clone x '.ROOT.'/worker');", "if(!file_exists(ROOT.'/worker/worker_despatcher.php')){", "echo 'git not downloaded right';", "exit();", "}", "echo 'git downloaded';", "include_once ROOT.'/worker/worker_despatcher.php';" ]]}, "mode" : "000755", "owner" : "ubuntu", "group" : "ubuntu" } } } } Which does not seem to run at all. I have checked for the files existance in my home directory and it's not there. I have checked for the cronjob entry and it's not there either. I cannot, after reading through the documentation, seem to see what's potentially wrong with my code. Any thoughts on why this is not working? Am I missing something blatant?

    Read the article

  • Unable to terminate extra EC2 instances

    - by Deborah Cole
    I'm just setting up my AWS server & I'm trying to use the EC2 Console to terminate some extra instances that I generated via the AWS for Eclipse toolkit's New Project AWS Java Web Project utility. Unfortunately, every time I stop, then terminate such an instance via the EC2 Console, it automatically recreates & reactivates itself! I really don't want to be paying for 4 dev systems when I only need 1, so can somebody please clue me in? Please explain gently... I'm new to this environment.
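
    Instances that come straight back after being terminated are usually being replaced by an Auto Scaling group (Elastic Beanstalk environments, for example, manage one), which may well be what the toolkit created behind the scenes. A boto3 sketch (the group name is a placeholder) that lists the groups and then scales the unwanted one down to zero so its instances stay terminated:

        import boto3

        autoscaling = boto3.client("autoscaling", region_name="us-east-1")

        # See which Auto Scaling groups exist and which instances they keep alive.
        for group in autoscaling.describe_auto_scaling_groups()["AutoScalingGroups"]:
            print(group["AutoScalingGroupName"], group["DesiredCapacity"],
                  [i["InstanceId"] for i in group["Instances"]])

        # Scale the unwanted group to zero so its instances are not relaunched.
        autoscaling.update_auto_scaling_group(
            AutoScalingGroupName="awseb-my-env-AutoScalingGroup",  # placeholder name
            MinSize=0,
            MaxSize=0,
            DesiredCapacity=0,
        )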

    Read the article

  • AWS lighttpd: Sending a copy of requests to test.

    - by Martin
    I have a load-balanced service on AWS, so the ELB evenly distributes the load across my servers. Each server runs lighttpd, which does logging and forwards the requests to my service (on the same machine). I have written a new version of the service. It is installed and running on an EC2 machine, test1 (basically a mirror of our current server, but with the new service running instead of the original), and I have done some preliminary tests that look good. What I would like to do is mirror a fraction of incoming traffic to the new version of the service so I can compare the original version and the new version on real traffic. Thus I was thinking I could modify one box behind the ELB to duplicate its traffic to test1. I was thinking I could modify the configuration of lighttpd so that each request is mirrored/duplicated (i.e. the original service keeps responding as before, but a mirror request is sent to test1 and the reply is just dropped). Unfortunately I have not been able to work this out. Any ideas on how I could mirror the requests from one box to both itself and test1? Or any other ideas for testing?
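
    As far as I know lighttpd has no built-in request-mirroring module, so one workaround is to replay requests from the production access log against test1 and throw the responses away. A rough sketch, assuming the Python requests library, a common-log-format access log, and that only GET requests are safe to replay; the log path and test1 address are placeholders.

        import time
        import requests  # assumes the 'requests' library is installed

        LOG = "/var/log/lighttpd/access.log"
        TEST_HOST = "http://test1.example.internal"   # placeholder address of test1

        def follow(path):
            """Yield lines as they are appended to the access log (like tail -f)."""
            with open(path) as f:
                f.seek(0, 2)                 # start at the end of the file
                while True:
                    line = f.readline()
                    if not line:
                        time.sleep(0.5)
                        continue
                    yield line

        for line in follow(LOG):
            try:
                request_field = line.split('"')[1]       # e.g. 'GET /foo HTTP/1.1'
                method, path, _ = request_field.split(" ", 2)
            except (IndexError, ValueError):
                continue
            if method != "GET":
                continue                     # only replay idempotent requests
            try:
                requests.get(TEST_HOST + path, timeout=5)   # response is discarded
            except requests.RequestException:
                pass                         # mirroring must never affect production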

    Read the article

  • Deleting multiple objects in an AWS S3 bucket with s3curl.pl?

    - by user183394
    I have been trying to use the AWS "official" command line tool s3curl.pl to test out the recently announced multi-object delete. Here is what I have done: First, I tested out the s3curl.pl with a set of credentials without a hitch: $ s3curl.pl --id=s3 -- http://testbucket-0.s3.amazonaws.com/|xmllint --format - % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 884 0 884 0 0 4399 0 --:--:-- --:--:-- --:--:-- 5703 <?xml version="1.0" encoding="UTF-8"?> <ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"> <Name>testbucket-0</Name> <Prefix/> <Marker/> <MaxKeys>1000</MaxKeys> <IsTruncated>false</IsTruncated> <Contents> <Key>file_1</Key> <LastModified>2012-03-22T17:08:17.000Z</LastModified> <ETag>"ee0e521a76524034aaa5b331842a8b4e"</ETag> <Size>400000</Size> <Owner> <ID>e6d81ea69572270e58d3814ab674df8c8f1fd5d502669633a4951bdd5185f7f4</ID> <DisplayName>zackp</DisplayName> </Owner> <StorageClass>STANDARD</StorageClass> </Contents> <Contents> <Key>file_2</Key> <LastModified>2012-03-22T17:08:19.000Z</LastModified> <ETag>"6b32cbf8219a59690a9f69ba6ff3f590"</ETag> <Size>600000</Size> <Owner> <ID>e6d81ea69572270e58d3814ab674df8c8f1fd5d502669633a4951bdd5185f7f4</ID> <DisplayName>zackp</DisplayName> </Owner> <StorageClass>STANDARD</StorageClass> </Contents> </ListBucketResult> Then, I following the s3curl.pl's usage instructions: s3curl.pl --help Usage /usr/local/bin/s3curl.pl --id friendly-name (or AWSAccessKeyId) [options] -- [curl-options] [URL] options: --key SecretAccessKey id/key are AWSAcessKeyId and Secret (unsafe) --contentType text/plain set content-type header --acl public-read use a 'canned' ACL (x-amz-acl header) --contentMd5 content_md5 add x-amz-content-md5 header --put <filename> PUT request (from the provided local file) --post [<filename>] POST request (optional local file) --copySrc bucket/key Copy from this source key --createBucket [<region>] create-bucket with optional location constraint --head HEAD request --debug enable debug logging common curl options: -H 'x-amz-acl: public-read' another way of using canned ACLs -v verbose logging Then, I tried the following, and always got back error. I would appreciated it very much if someone could point out where I made a mistake? $ s3curl.pl --id=s3 --post multi_delete.xml -- http://testbucket-0.s3.amazonaws.com/?delete <?xml version="1.0" encoding="UTF-8"?> <Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message><StringToSignBytes>50 4f 53 54 0a 0a 0a 54 68 75 2c 20 30 35 20 41 70 72 20 32 30 31 32 20 30 30 3a 35 30 3a 30 38 20 2b 30 30 30 30 0a 2f 7a 65 74 74 61 72 2d 74 2f 3f 64 65 6c 65 74 65</StringToSignBytes><RequestId>707FBE0EB4A571A8</RequestId><HostId>mP3ZwlPTcRqARQZd6gU4UvBrxGBNIVa0VVe5p0rqGmq5hM65RprwcG/qcXe+pmDT</HostId><SignatureProvided>edkNGuugiSFe0ku4eGzkh8kYgHw=</SignatureProvided><StringToSign>POST Thu, 05 Apr 2012 00:50:08 +0000 The file multi_delete.xml contains the following: cat multi_delete.xml <?xml version="1.0" encoding="UTF-8"?> <Delete> <Quiet>true</Quiet> <Object> <Key>file_1</Key> <VersionId> </VersionId>> </Object> <Object> <Key>file_2</Key> <VersionId> </VersionId> </Object> </Delete> Thanks for any help! --Zack
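
    If the goal is to exercise the multi-object delete API rather than s3curl.pl itself, the same request is straightforward from boto3, which takes care of the signing. This is only a sketch of the equivalent call; the bucket and keys are taken from the listing above, and version IDs are omitted for an unversioned bucket.

        import boto3

        s3 = boto3.client("s3")
        resp = s3.delete_objects(
            Bucket="testbucket-0",
            Delete={
                "Quiet": True,
                "Objects": [
                    {"Key": "file_1"},
                    {"Key": "file_2"},
                ],
            },
        )
        # In quiet mode only errors are reported back.
        print(resp.get("Deleted", []), resp.get("Errors", []))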

    Read the article

  • Overhead of Perfmon -> direct to SQL Database

    - by StuartC
    Hi all, First up, I'm a total newb at performance monitoring. I'm looking to set up central performance monitoring of some boxes: 2K3 TS (monitor general OS perf and session-specific counters), 2K8 R2 (XenApp 6; monitor general OS perf and session-specific counters), File Server (standard file I/O). My ultimate aim is to gather as many counters and as much information as possible, including counters specific to the users' sessions, without impacting the clients' session experience at all. I was thinking of logging directly to a SQL Server on another machine, instead of the two-part process of writing a .blg file and then relogging it to SQL. Would that work OK? Does anyone know the overhead of going straight to SQL from the client? I've searched around a bit, but there's so much information it can be overwhelming. Thanks

    Read the article

  • Amazon EC2: possible to use elastic load balancing across web servers in multiple regions based on location of client?

    - by Tony
    Related to another question I asked. This question seems similar, but I'm wondering if there are any updates. To support a single site that has users all over the world, I will create EC2 web servers in the US, Asia and Europe regions. The web server instances in the US and Asia regions will be backed by RDS replicas. Is it possible to load balance across these three regions? So when a customer from Spain goes to example.com, she should be routed to the EC2 instances in the Europe region, a customer in Miami should be sent to the instance in the Eastern US region, etc. Is it possible to do this with just AWS features? Are there docs on how to set this up?
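
    Route 53 latency-based routing is one way to do this with AWS features alone: create one record per region, all with the same name but different SetIdentifier and Region values, and each user is answered with the endpoint that has the lowest latency from their location, which generally tracks geography. A hedged boto3 sketch with placeholder zone ID and ELB hostnames; note that for the bare zone apex you would need alias records rather than CNAMEs, so this example uses www.

        import boto3

        route53 = boto3.client("route53")

        regions = {
            "us-east-1": "us-elb-1234567890.us-east-1.elb.amazonaws.com",
            "eu-west-1": "eu-elb-1234567890.eu-west-1.elb.amazonaws.com",
            "ap-southeast-1": "ap-elb-1234567890.ap-southeast-1.elb.amazonaws.com",
        }

        changes = [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com.",
                "Type": "CNAME",
                "TTL": 60,
                "SetIdentifier": region,    # distinguishes the latency records
                "Region": region,           # region this record answers for
                "ResourceRecords": [{"Value": hostname}],
            },
        } for region, hostname in regions.items()]

        route53.change_resource_record_sets(
            HostedZoneId="Z0000000000000",  # placeholder hosted zone
            ChangeBatch={"Changes": changes},
        )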

    Read the article

  • GitHub push to AWS Elastic Beanstalk

    - by nute
    I am using GitHub for code management. I am using Amazon AWS Elastic Beanstalk as a server. Amazon announced that you can use Git to push code to the application server. However, to do this I'd have to let go of GitHub, as they are essentially replacing the Git server. Is there any way to have the best of both worlds? I don't necessarily need to "deploy" every time I push, but I'd like to have it uploaded as a "Version", and then I can deploy whichever version I want at any time.
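
    One way to keep GitHub as the origin is to have a hook or CI step bundle the commit, upload it to S3 and register it as a Beanstalk application version; deploying any particular version then stays a separate, explicit step. A sketch assuming boto3, with placeholder bucket, application and environment names:

        import subprocess
        import boto3

        VERSION = subprocess.check_output(
            ["git", "rev-parse", "--short", "HEAD"]).decode().strip()
        BUNDLE = "myapp-" + VERSION + ".zip"

        # Bundle the tree exactly as committed (zip format inferred from the name).
        subprocess.check_call(["git", "archive", "-o", BUNDLE, "HEAD"])

        s3 = boto3.client("s3")
        s3.upload_file(BUNDLE, "my-deploy-bucket", BUNDLE)   # placeholder bucket

        eb = boto3.client("elasticbeanstalk")
        eb.create_application_version(
            ApplicationName="myapp",                         # placeholder names
            VersionLabel=VERSION,
            SourceBundle={"S3Bucket": "my-deploy-bucket", "S3Key": BUNDLE},
        )

        # Deploy whichever registered version you want, whenever you want:
        # eb.update_environment(EnvironmentName="myapp-prod", VersionLabel=VERSION)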

    Read the article

  • Certificate Template Missing from "Certificate Template to Issue"

    - by Adam Robinson
    I'm having a problem similar to that posted in this question: Missing Certificate template From certificate to issue The short version is that I've created a duplicate certificate template and I'm trying to add it to my domain CA so that I can issue certificates with it. However, when I go into the Certification Authority MMC and go to "Certificate Templates - New - Certificate Template To Issue", my template is missing (along with quite a number of other templates that are present in the domain). Unlike the previous question, however, my CA is running on Server 2008 R2 Enterprise. Our organization has a single DC and a single CA, so I'm not seeing where there could be propagation delay. Any ideas how to get my template to show so that I can issue certificates?

    Read the article

  • Chef-solo cannot locate an nginx recipe template

    - by crftr
    I have been recently experimenting with Chef. I thought I would attempt to rebuild my personal web server using chef-solo. It's an AWS instance running the Amazon 64bit Linux AMI. My first objective is to install nginx. I have cloned the Opscode cookbook repository, and am using their nginx cookbook. My problem appears to be that chef-solo cannot find a template after it has started the process. The command I'm using is chef-solo -j /etc/chef/dna.json dna.json { "nginx": { "user": "ec2-user" }, "recipes": [ "nginx" ] } solo.rb file_cache_path "/var/chef-solo" cookbook_path "/var/chef-solo/cookbooks" ...the output [root@ip-10-202-221-135 chef-solo]# chef-solo -j /etc/chef/dna.json /usr/lib64/ruby/gems/1.9.1/gems/systemu-2.2.0/lib/systemu.rb:29: Use RbConfig instead of obsolete and deprecated Config. [Fri, 27 Jan 2012 19:41:36 +0000] INFO: *** Chef 0.10.8 *** [Fri, 27 Jan 2012 19:41:37 +0000] INFO: Setting the run_list to ["nginx"] from JSON [Fri, 27 Jan 2012 19:41:37 +0000] INFO: Run List is [recipe[nginx]] [Fri, 27 Jan 2012 19:41:37 +0000] INFO: Run List expands to [nginx] [Fri, 27 Jan 2012 19:41:37 +0000] INFO: Starting Chef Run for ip-10-202-221-135.ec2.internal [Fri, 27 Jan 2012 19:41:37 +0000] INFO: Running start handlers [Fri, 27 Jan 2012 19:41:37 +0000] INFO: Start handlers complete. [Fri, 27 Jan 2012 19:41:37 +0000] INFO: Missing gem 'mysql' [Fri, 27 Jan 2012 19:41:38 +0000] INFO: Processing package[nginx] action install (nginx::default line 21) [Fri, 27 Jan 2012 19:41:39 +0000] INFO: Processing directory[/var/log/nginx] action create (nginx::default line 23) [Fri, 27 Jan 2012 19:41:39 +0000] INFO: Processing template[/usr/sbin/nxensite] action create (nginx::default line 30) [Fri, 27 Jan 2012 19:41:39 +0000] INFO: Processing template[/usr/sbin/nxdissite] action create (nginx::default line 30) [Fri, 27 Jan 2012 19:41:39 +0000] INFO: Processing template[nginx.conf] action create (nginx::default line 38) [Fri, 27 Jan 2012 19:41:39 +0000] INFO: Processing template[/etc/nginx/sites-available/default] action create (nginx::default line 46) [Fri, 27 Jan 2012 19:41:39 +0000] INFO: template[/etc/nginx/sites-available/default] mode changed to 644 [Fri, 27 Jan 2012 19:41:39 +0000] ERROR: template[/etc/nginx/sites-available/default] (nginx::default line 46) has had an error [Fri, 27 Jan 2012 19:41:39 +0000] ERROR: template[/etc/nginx/sites-available/default] (/var/chef-solo/cookbooks/nginx/recipes/default.rb:46:in `from_file') had an error: template[/etc/nginx/sites-available/default] (nginx::default line 46) had an error: Errno::ENOENT: No such file or directory - (/tmp/chef-rendered-template20120127-29441-1yp55vz, /etc/nginx/sites-available/default) /usr/lib64/ruby/1.9.1/fileutils.rb:519:in `rename' /usr/lib64/ruby/1.9.1/fileutils.rb:519:in `block in mv' /usr/lib64/ruby/1.9.1/fileutils.rb:1515:in `block in fu_each_src_dest' /usr/lib64/ruby/1.9.1/fileutils.rb:1531:in `fu_each_src_dest0' /usr/lib64/ruby/1.9.1/fileutils.rb:1513:in `fu_each_src_dest' /usr/lib64/ruby/1.9.1/fileutils.rb:508:in `mv' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/provider/template.rb:47:in `block in action_create' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/mixin/template.rb:48:in `block in render_template' /usr/lib64/ruby/1.9.1/tempfile.rb:316:in `open' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/mixin/template.rb:45:in `render_template' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/provider/template.rb:99:in `render_with_context' 
/usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/provider/template.rb:39:in `action_create' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/resource.rb:440:in `run_action' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/runner.rb:45:in `run_action' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/runner.rb:81:in `block (2 levels) in converge' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/runner.rb:81:in `each' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/runner.rb:81:in `block in converge' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/resource_collection.rb:94:in `block in execute_each_resource' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/resource_collection/stepable_iterator.rb:116:in `call' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/resource_collection/stepable_iterator.rb:116:in `call_iterator_block' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/resource_collection/stepable_iterator.rb:85:in `step' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/resource_collection/stepable_iterator.rb:104:in `iterate' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/resource_collection/stepable_iterator.rb:55:in `each_with_index' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/resource_collection.rb:92:in `execute_each_resource' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/runner.rb:76:in `converge' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/client.rb:312:in `converge' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/client.rb:160:in `run' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/application/solo.rb:192:in `block in run_application' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/application/solo.rb:183:in `loop' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/application/solo.rb:183:in `run_application' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/application.rb:67:in `run' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/bin/chef-solo:25:in `<top (required)>' /usr/bin/chef-solo:19:in `load' /usr/bin/chef-solo:19:in `<main>' [Fri, 27 Jan 2012 19:41:39 +0000] ERROR: Running exception handlers [Fri, 27 Jan 2012 19:41:39 +0000] ERROR: Exception handlers complete [Fri, 27 Jan 2012 19:41:39 +0000] FATAL: Stacktrace dumped to /var/chef-solo/chef-stacktrace.out [Fri, 27 Jan 2012 19:41:39 +0000] FATAL: Errno::ENOENT: template[/etc/nginx/sites-available/default] (nginx::default line 46) had an error: Errno::ENOENT: No such file or directory - (/tmp/chef-rendered-template20120127-29441-1yp55vz, /etc/nginx/sites-available/default) What am I doing incorrectly?

    Read the article

  • Monit unable to start sidekiq on Opsworks server

    - by webdevtom
    I have used AWS OpsWorks to create some servers. I have Sidekiq running as part of my Rails application. When I deploy, Sidekiq restarts nicely. I am configuring Monit to watch the pid and start and stop Sidekiq if there are any issues. However, when Monit tries to start Sidekiq, I see that the wrong Ruby appears to be used. Oct 17 13:52:43 daitengu sidekiq: /usr/local/lib/ruby/gems/1.9.1/gems/bundler-1.3.4/lib/bundler/definition.rb:361:in `validate_ruby!': Your Ruby version is 1.8.7, but your Gemfile specified 1.9.3 (Bundler::RubyVersionMismatch) Oct 17 13:52:43 daitengu sidekiq: from /usr/local/lib/ruby/gems/1.9.1/gems/bundler-1.3.4/lib/bundler.rb:116:in `setup' Oct 17 13:52:43 daitengu sidekiq: from /usr/local/lib/ruby/gems/1.9.1/gems/bundler-1.3.4/lib/bundler/setup.rb:17 When I run the command from the CLI, Sidekiq launches correctly. $> cd /srv/www/myapp/current && RAILS_ENV=production nohup /usr/local/bin/bundle exec sidekiq -C config/sidekiq.yml >> /srv/www/myapp/shared/log/sidekiq.log 2>&1 & $> ps -aef |grep sidekiq root 1236 1235 8 20:54 pts/0 00:00:50 sidekiq 2.11.0 myapp [0 of 25 busy] My sidekiq.monitrc file: check process unicorn with pidfile /srv/www/myapp/shared/pids/unicorn.pid start program = "/bin/bash -c 'cd /srv/www/myapp/current && /usr/local/bin/bundle exec unicorn_rails --env production --daemonize -c /srv/www/myapp/shared/config/unicorn.conf'" stop program = "/bin/bash -c 'kill -QUIT `cat /srv/www/myapp/shared/pids/unicorn.pid`'"

    Read the article

  • Why is uploading to S3 so slow?

    - by Tom Marthenal
    I am using s3cmd to upload to S3: # s3cmd put 1gb.bin s3://my-bucket/1gb.bin 1gb.bin -> s3://my-bucket/1gb.bin [1 of 1] 366706688 of 1073741824 34% in 371s 963.22 kB/s I am uploading from Linode, which has an outgoing bandwidth cap of 50 Mb/s according to support (roughly 6 MB/s). Why am I getting such slow upload speeds to S3, and how can I improve them? Update: Uploading the same file via SCP to an m1.medium EC2 instance (SCP from my Linode to the instance's EBS drive) gives about 44 Mb/s according to iftop (any compression done by the cipher is not a factor). Traceroute: Here's a traceroute to the server it's uploading to (according to tcpdump). # traceroute s3-1-w.amazonaws.com. traceroute to s3-1-w.amazonaws.com. (72.21.194.32), 30 hops max, 60 byte packets 1 207.99.1.13 (207.99.1.13) 0.635 ms 0.743 ms 0.723 ms 2 207.99.53.41 (207.99.53.41) 0.683 ms 0.865 ms 0.915 ms 3 vlan801.tbr1.mmu.nac.net (209.123.10.9) 0.397 ms 0.541 ms 0.527 ms 4 0.e1-1.tbr1.tl9.nac.net (209.123.10.102) 1.400 ms 1.481 ms 1.508 ms 5 0.gi-0-0-0.pr1.tl9.nac.net (209.123.11.62) 1.602 ms 1.677 ms 1.699 ms 6 equinix02-iad2.amazon.com (206.223.115.35) 9.393 ms 8.925 ms 8.900 ms 7 72.21.220.41 (72.21.220.41) 32.610 ms 9.812 ms 9.789 ms 8 72.21.222.141 (72.21.222.141) 9.519 ms 9.439 ms 9.443 ms 9 72.21.218.3 (72.21.218.3) 10.245 ms 10.202 ms 10.154 ms 10 * * * 11 * * * 12 * * * 13 * * * 14 * * * 15 * * * 16 * * * 17 * * * 18 * * * 19 * * * 20 * * * 21 * * * 22 * * * 23 * * * 24 * * * 25 * * * 26 * * * 27 * * * 28 * * * 29 * * * 30 * * * The latency looks reasonable, at least until the server stopped responding to ping requests.
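
    Single-stream PUTs to S3 are often limited by per-connection throughput rather than by the outgoing link, so multipart uploads with several connections in flight usually help (newer s3cmd releases support multipart uploads as well). A sketch of the same idea using boto3's managed transfer, where chunk size and concurrency are the knobs to tune:

        import boto3
        from boto3.s3.transfer import TransferConfig

        # Managed multipart upload: several 16 MB parts in flight at once.
        config = TransferConfig(
            multipart_threshold=16 * 1024 * 1024,
            multipart_chunksize=16 * 1024 * 1024,
            max_concurrency=8,
        )

        s3 = boto3.client("s3")
        s3.upload_file("1gb.bin", "my-bucket", "1gb.bin", Config=config)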

    Read the article

  • rsync to EC2 using ssh -i

    - by isomorphismes
    I'm able to ssh -i mykey.pem to EC2. I'm able to scp -i mykey.pem to EC2. But when I try to rsync -avz -e "ssh -i mykey.pem" I get this error: Warning: Identity file mykey.pem not accessible: No such file or directory. Permission denied (publickey). rsync: connection unexpectedly closed (0 bytes received so far) [sender] rsync error: unexplained error (code 255) at io.c(605) [sender=3.0.9] Any suggestions what I've done wrong?

    Read the article

  • Keeps "SSH timeout" error in our AWS instance- how do i diagnose?

    - by ming yeow
    I am befuddled by this error. We keep failing to SSH into our AWS instance, whether it is during deployment or via the console. I have tried rebooting a few times, but it does not seem to be helping. Here are a couple of error messages I keep getting: connection failed for: HOST.NAME.amazonaws.com (Errno::ETIMEDOUT: Operation timed out - connect(2)) 111.222.333.444: ssh connection failed at 2010-07-02 03:39:37 I also SSHed in when it was up, and monitored "top" when SSH times out. Looking at the memory logs, it does not look like any program was hogging memory.

    Read the article

  • RemoteApp Security Warning

    - by nairware
    I have a Windows 2012 Standard x64 RemoteApps RDWeb portal where I can launch apps. We have one remote app in particular which is RDP (mstsc.exe). Whenever a user launches it, they receive three different prompts--the second one is this alert (shown below). How can I get rid of this alert? I have other RemoteApps launching as well, and they do not throw errors or alerts like this one. And they are applications with the .exe extension, so I do not understand what is so unique about the RDP RemoteApp that would cause this alert. One thing perhaps worth mentioning is this particular RDP remote app points directly to the mstsc.exe executable residing on a particular session host/terminal server (as shown in the "From" value of the warning). As such, a gateway server would not be used to load-balance and choose the RDP client launched from a session host at random. This RDP RemoteApp is explicitly associated with one particular terminal server.

    Read the article
