Search Results

Search found 2856 results on 115 pages for 'amazon beanstalk'.


  • Mount /tmp in /mnt on EC2

    - by Claudio Poli
    I was wondering what the best way is to mount /tmp onto the ephemeral storage at /mnt on an EC2 instance and give the ubuntu user default write permissions. Some suggest editing /etc/rc.local this way:

        mkdir -p /mnt/tmp && mount --bind -o nobootwait /mnt/tmp /tmp

    However, that doesn't work for me. I tried editing the fstab:

        /dev/xvdb /tmp auto defaults,nobootwait,comment=cloudconfig 0 2

    and giving it a umask=0777, but that doesn't work because of cloudconfig. I'm using Ubuntu 12.04.
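
    One common variant is to do the bind mount in /etc/rc.local and rely on permissions rather than a umask option. A minimal sketch, assuming /mnt is already mounted by the time rc.local runs (which the cloudconfig issue above may invalidate):

        # /etc/rc.local sketch
        mkdir -p /mnt/tmp
        chmod 1777 /mnt/tmp      # world-writable with sticky bit, the standard /tmp mode
        mount --bind /mnt/tmp /tmp

    The chmod 1777 is what gives the ubuntu user (and everyone else) write access, so no umask option is needed.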

    Read the article

  • Which EC2 instance best for Chef server?

    - by hakunin
    I want to set up a Chef server as cheaply as possible, while leaving it enough room to run without crashing. The only article I found on the subject warned that RabbitMQ would crash on a micro instance due to insufficient memory. The question is: what's the cheapest EC2 instance that can run a Chef server reliably, considering that I don't use CouchDB or RabbitMQ for anything else in my app, so I would probably have to set them up exclusively for the Chef server on that same instance?

    Read the article

  • Is there the equivalent of cloud computing for modems?

    - by morpheous
    I asked this question on SF, and someone recommended that I ask it here (I don't think I have enough points to move a question from SF to SO, and in any case I don't know how, so here is the question again). I am interested in the concept of PaaS (platform as a service). However, all talk about SaaS/PaaS seems to focus only on the computer itself, not its peripherals. Is it possible to 'outsource' modems as a resource, so that an app running remotely can pump data to a modem in the cloud? As background: a group of us are thinking of starting a company that offers services similar to companies like Twilio, but I want to outsource both the computing hardware (that's PaaS, the easy bit) and the modems (that's what I can't seem to find any info on). Does anyone know if modems can be bundled as part of a PaaS service? Alternatively, is there a way for an application running on one computer to communicate (i.e. pump data) with a remote modem residing on another machine? I assume I could come up with some protocol over UDP or TCP, but there is no point reinventing the wheel if such a protocol already exists (or if some open source software allows one to do this). Any suggestions on how to solve this problem?

    Read the article

  • Problems connecting Java VisualVM to an EC2 instance

    - by kasten
    I'm trying to profile an AWS EC2 instance via VisualVM. The instance is in a security group which allows all connections, and I'm running jstatd on it with the following policy:

        grant codebase "file:${java.home}/../lib/tools.jar" {
            permission java.security.AllPermission;
        };

    When I try to connect from my local machine with VisualVM, nothing happens. When I use jps I get the following response:

        $ jps -l -m -v rmi://ec2-xxx-xxx-xxx-xxx.compute-1.amazonaws.com
        Error communicating with remote host: Connection refused to host: xxx.xxx.xxx.xxx; nested exception is:
            java.net.ConnectException: Connection timed out

    But I can ssh into the instance and use jps locally. Does anyone have a pointer in which direction I can debug further?
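
    One frequent culprit on EC2 is RMI advertising the instance's private address to the client. A sketch of a jstatd setup that pins the advertised hostname (the policy file name and the public hostname are placeholders):

        # jstatd.policy
        grant codebase "file:${java.home}/../lib/tools.jar" {
            permission java.security.AllPermission;
        };

        # start jstatd advertising the public hostname so the RMI stub is reachable
        jstatd -J-Djava.security.policy=jstatd.policy \
               -J-Djava.rmi.server.hostname=ec2-xxx-xxx-xxx-xxx.compute-1.amazonaws.com

    The jstatd registry (default port 1099) and the ephemeral RMI ports also need to be reachable, which an allow-all security group should already cover.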

    Read the article

  • How to configure squid for retrieving (and caching) directly my static resources?

    - by fabien7474
    I have an Apache/Tomcat/Spring tc Server stack running on a CentOS EC2 VM. I would like to install squid on the same machine as a proxy for serving and caching static content ONLY, identified by the URI prefixes /images, /css or /js; cached responses should be served directly, without forwarding the request to Apache/Tomcat. Other URIs should be forwarded to the normal web server and not cached. Since I am a newbie, I couldn't work out from the squid documentation how to configure this desired behavior (or whether it is even possible). Could you please tell me how I should configure squid for this purpose? Thank you.
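
    A minimal squid.conf sketch of one way to express this, assuming squid takes over port 80 as a reverse proxy and Apache moves to port 8080 (both port choices and the site name are assumptions):

        http_port 80 accel defaultsite=www.example.com
        cache_peer 127.0.0.1 parent 8080 0 no-query originserver name=webserver

        acl static urlpath_regex ^/(images|css|js)/
        cache allow static
        cache deny all
        http_access allow all

    Everything still flows through squid, but only URIs matching the static ACL are cached; once cached, those are answered without touching Apache/Tomcat.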

    Read the article

  • Apache taking up a lot of CPU while running request-tracker4

    - by bhowmik
    I am trying out a request-tracker installation on an EC2 micro instance. The specs are as follows:

    1) Ubuntu 12.04 64bit, 613MB RAM, 8GB hard drive
    2) request-tracker 4.0.4 from the repository, perl 5.14.2, Apache2, MySQL5
    3) request-tracker 4.0.4 running with mod_perl2 and the worker MPM
    4) Apache configured with the worker MPM; config snippet below:

        Timeout 150
        KeepAlive On
        MaxKeepAliveRequests 60
        KeepAliveTimeout 2
        <IfModule mpm_worker_module>
            StartServers          2
            MinSpareThreads      25
            MaxSpareThreads      75
            ThreadLimit          64
            ThreadsPerChild      25
            MaxClients          150
            MaxRequestsPerChild   0
        </IfModule>

    Now when I start Apache2 it works fine for some time, and after a while the CPU load shoots up to 99% or more, usually from one or more Apache processes. I've tried modifying the worker module configuration without any luck. The log files for both Apache2 and request-tracker4 are set to log debug messages and don't show anything to indicate what could be causing this. The system gets a maximum of 5 users at any given time, and usually (90% of the time) it is just 2. I've just installed it and we only have 20 tickets in the database. I don't think it's memory: the server isn't swapping, or even close to it, and I hardly see memory usage go up. I would appreciate any pointers on how to go about troubleshooting this. In case it helps, I've also tried a similar installation on a small instance (identical settings except RAM bumped up to 1.7GB) and I still see the issue.
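
    A generic sketch for pinning down what a spinning Apache child is doing (all standard tools, assumed installed):

        top -c                       # note the PID of the worker at 99% CPU
        sudo strace -p <PID> -f -tt  # trace its system calls; look for a tight loop
        # or enable mod_status and watch /server-status?auto for stuck requests

    With mod_perl it can also help to inspect the Perl level of the busy child, e.g. via Apache2::Status, to see whether RT itself is looping rather than Apache.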

    Read the article

  • EBS+RAID10+XFS slower than EBS+RAID10+EXT3 using MySQL?

    - by Johann Tagle
    We're currently using EC2 with 16 EBS volumes in a RAID10 configuration for our MySQL data. I know some people don't recommend putting EBS volumes in RAID, but that's not what I'm concerned about at the moment. The current format is ext3, but we're experimenting with moving to xfs, given the many reports that it is faster. However, we're actually seeing a performance degradation after converting the partition to xfs: a benchmark run with inserts, updates, selects and deletes was more than 10 seconds slower on xfs. Any idea what could be the problem? Below is the fstab entry (really only changed ext3 to xfs). The database tables are InnoDB and we are using innodb_file_per_table.

        /dev/mapper/vg_data-lv_data /data xfs noatime 0 0

    Thanks.
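
    For comparison, a mount-option sketch sometimes suggested for MySQL on XFS (treat these as things to benchmark, not a recommendation: nobarrier in particular trades crash safety for speed and only makes sense when the storage layer provides its own durability guarantees):

        /dev/mapper/vg_data-lv_data /data xfs noatime,nobarrier,logbufs=8,logbsize=256k 0 0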

    Read the article

  • Assign PowerShell script to run at startup using PowerShell on Windows Server 2012

    - by James Toyer
    I'm trying to write a PowerShell script that will run when a Windows Server 2012 instance is created on AWS, using the configuration tools provided by AWS. My problem is that I want to change the name of the machine once it has started up, restart the machine, and carry on with the setup process afterwards. The main reason is that one of the applications installed during setup, Boundary, takes the name of the server when first installed, and it then doesn't seem possible to change its name in their portal. Ideally I would have two PowerShell scripts: one to start the setup process, initialised through AWS, and another that runs the first time the machine restarts, queued by the initial setup script. So I guess my questions are: Is this possible? How would I go about doing it? My Google-fu is letting me down here, so any answers would be appreciated.
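
    A sketch of the classic RunOnce approach in PowerShell (the machine name, paths and value name are all placeholders):

        # phase 1: rename, queue phase 2 for the next boot, then restart
        Rename-Computer -NewName 'web-01' -Force
        Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\RunOnce' `
            -Name 'FinishSetup' `
            -Value 'powershell.exe -ExecutionPolicy Bypass -File C:\setup\phase2.ps1'
        Restart-Computer -Force

    RunOnce entries execute at the next logon of an administrator, so this assumes auto-logon or an equivalent; a scheduled task with an at-startup trigger is a common alternative when nobody logs in.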

    Read the article

  • s3fs not maintaining uid/gid

    - by Publiccert
    I'm mounting S3 using s3fs with the following command:

        sudo /usr/local/bin/s3fs -o use_cache=/tmp,uid=1000,gid=1000,allow_other svn.domain.com /svn

    allow_other is confirmed to be working, along with the cache. However, no variation or different placement of the uid/gid options has any effect on the metadata in S3: both the uid and gid come up as '0' in S3. I created a user called svn with a uid of 1000 to see if that would fix the problem. No luck.

    Read the article

  • Cloud based backup solutions based on open standards?

    - by Rick
    I am looking for a solution to back up and consolidate important media from a couple of Windows laptops and a Mac laptop. I would like a solution based on open standards, so my data isn't trapped in proprietary formats or behind proprietary protocols, and so I can switch clients or change providers in the future. For example, something like Jungle Disk plus S3 sounds like a great option; however, I am having trouble confirming whether it can be set up to meet these criteria. Are there any real or de-facto standards for treating S3 as a filesystem? If so, which Windows and Mac clients support them?

    Read the article

  • A decent S3 bucket manager for Ubuntu

    - by Luke
    I'm looking for a decent S3 bucket manager for Ubuntu (GNOME). I'd prefer it to integrate with Nautilus so buckets look like just any other drive (a la WebDAV), but so far I haven't found anything I'd like to use on a daily basis. What bucket managers do you use on Ubuntu, or which would you recommend? UPDATE: s3fs seems to be what I'd really want, since it lets me integrate my buckets directly into my filesystem. However, after trying s3fs I don't get the impression it's ready for prime time. I'm stunned that there are no decent bucket managers out there for Ubuntu/GNOME; I guess I'll have to build one myself...

    Read the article

  • Connect to RDS inside a VPC using Opsworks located in another VPC

    - by Consuelo Merino
    I have an RDS instance (MySQL) inside a VPC called vpc-a (10.0.0.0/16). This instance is private: it can only be accessed from within vpc-a. We created a stack on OpsWorks inside another VPC called vpc-b (10.1.0.0). We want to connect the OpsWorks instances to the RDS instance, but the connection is refused. I tried adding vpc-b's subnet to the RDS security group, and I've read a lot of documentation, but I haven't stumbled across the answer. Any help would be greatly appreciated.

    Read the article

  • Nginx proxy to s3 bucket gets 400 Invalid Argument

    - by elssar
    I have a Django app in which I serve media files through an nginx proxy to S3. The relevant Python code:

        response = HttpResponse()
        response['X-Accel-Redirect'] = '/s3_redirect/%s' % filefield.url.replace('http://', '')
        response['Content-Disposition'] = 'attachment; filename=%s' % filefield.name
        return response

    The nginx block for the internal redirect is:

        location ~* ^/s3_redirect/(.*) {
            internal;
            set $full_url http://$1;
            proxy_pass $full_url;
        }

    And the request logged by S3 is:

        REST.GET.OBJECT <media file> "GET <media file>" 400 InvalidArgument 354 - 4 - "http://<referer>" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_3) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.89 Safari/537.1" -

    For the life of me, I can't figure out what's wrong. The URL sent to nginx by the app is valid (it works in the browser), and nginx is sending a request to S3.
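
    One hedged guess: S3 can answer 400 InvalidArgument when it receives request headers it doesn't expect, and a plain proxy_pass forwards the client's original Host, Authorization and Cookie headers. A sketch of the block with those overridden (the resolver address is a placeholder; a resolver is required whenever proxy_pass uses a variable):

        location ~* ^/s3_redirect/(.*) {
            internal;
            resolver 8.8.8.8;
            set $full_url http://$1;
            proxy_set_header Host $proxy_host;   # send the S3 hostname, not the app's
            proxy_set_header Authorization '';
            proxy_set_header Cookie '';
            proxy_pass $full_url;
        }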

    Read the article

  • Hosting django backend for iPhone / Android app

    - by Ashok Fernandez
    I am looking to make an iPhone / Android app for my university using the Appcelerator Titanium framework. The app will rely heavily on a server backend which will pull information from other sites, figure out what is relevant to the user, and then deliver the content. Some of the information is individual to the user (calendar data), some updates frequently but is shared (bus timetables), and some is static and the same for everyone (magazine articles). I was going to use Django, as I am fairly proficient in Python, so I thought it would save time. My question is: which hosting services would you recommend for the server backend? I am expecting about 9000 people to use the app, with very random spikes in traffic, but unfortunately I have very little to go on at this stage. I have heard a lot about WebFaction; is it suitable for something like this, or am I likely to need something bigger? I don't really want to fork out for a VPS at this stage. What about Amazon's EC2? Would that be more suitable than WebFaction? Sorry for the fairly open-ended question; I'm sort of new to this, so I'm open to all suggestions.

    Read the article

  • Elastic Load Balancer & SSL termination

    - by Aaron Scruggs
    I am setting up a Rails app on AWS where: 1) all traffic must be SSL encrypted, 2) traffic will fluctuate heavily on a weekly basis, and 3) it will be maintained by someone who is a stronger coder than sysadmin, but responsible for both. I am thinking of SSL termination on an Elastic Load Balancer backed by small EC2 instances running nginx and Unicorn. A small subset of the requests will take longer than 10s; because of this I am also debating using Thin instead of Unicorn. My question is this: is this sane, or am I stepping into a quagmire of cost, maintainability, security or performance problems?

    Read the article

  • Should I have a Heroku worker dyno for polling an AWS SQS queue?

    - by Luccas
    I'm confused about where a script polling an AWS SQS queue should live inside a Rails application. If I use a thread inside the web app, it will presumably burn CPU cycles listening to this queue forever, hurting performance. But if I reserve a single Heroku worker dyno, it costs $34.50 per month. Does it make sense to pay that price just to poll a single queue, or is a worker the wrong tool here? The script code:

        queue = AWS::SQS::Queue.new(SQSADDR['my_queue'])
        queue.poll(:idle_timeout => 20) do |msg|
          # code here
        end

    I need help!! Thanks

    Read the article

  • AWS VPC public web application connecting to database via VPN

    - by Chris
    What I am trying to do is set up a web application that is public facing but makes calls to a database on an internal network. I have been trying to set up an AWS VPC with a public subnet, a private subnet, and hardware VPN access, but I can't seem to get it to work. Can someone help me understand what the process flow here should be? My understanding is that I need a public subnet to handle the website requests and a private subnet to connect to the VPN, but what I don't understand is how to send requests down the chain and get the response back. Basically, how can I query the database over the VPN from that public website? I've tried setting up route forwarding, but I can't successfully complete the process. Does anyone have advice on something I can read on this subject, or an FAQ on setting something like this up? Is it even possible? I'm out of my league here; this is not my area of expertise, but I'm being asked to solve this problem. Any help would be appreciated. Thanks

    Read the article

  • Using a script with Duplicity + S3, excluding large files

    - by Jason
    I'm trying to write a backup script that will exclude files over a certain size. If I run the script, duplicity gives an error; however, if I copy and paste the same command generated by the script, everything works... Here is the script:

        #!/bin/bash
        # Export some ENV variables so you don't have to type anything
        export AWS_ACCESS_KEY_ID="accesskey"
        export AWS_SECRET_ACCESS_KEY="secretaccesskey"
        export PASSPHRASE="password"

        SOURCE=/home/
        DEST=s3+http://s3bucket
        GPG_KEY="gpgkey"

        # exclude files over 100MB
        exclude () {
            find /home/jason -size +100M \
            | while read FILE; do
                echo -n " --exclude "
                echo -n \'**${FILE##/*/}\' | sed 's/\ /\\ /g'  # replace whitespace with "\ "
            done
        }

        echo "Using Command"
        echo "duplicity --encrypt-key=$GPG_KEY --sign-key=$GPG_KEY `exclude` $SOURCE $DEST"

        duplicity --encrypt-key=$GPG_KEY --sign-key=$GPG_KEY `exclude` $SOURCE $DEST

        # Reset the ENV variables.
        export AWS_ACCESS_KEY_ID=
        export AWS_SECRET_ACCESS_KEY=
        export PASSPHRASE=

    When the script is run I get the error:

        Command line error: Expected 2 args, got 6

    Where am I going wrong?
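
    A hedged observation: when the echoed command is pasted into a shell, the shell interprets the single quotes around each ** pattern, but when `exclude` expands inside the script those quotes are passed to duplicity literally and each space starts a new argument. A sketch of an array-based variant that sidesteps the re-quoting entirely (paths as in the original):

        # build the exclude list as a bash array so whitespace survives intact
        EXCLUDES=()
        while IFS= read -r FILE; do
            EXCLUDES+=(--exclude "**${FILE##/*/}")
        done < <(find /home/jason -size +100M)

        duplicity --encrypt-key="$GPG_KEY" --sign-key="$GPG_KEY" "${EXCLUDES[@]}" "$SOURCE" "$DEST"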

    Read the article

  • EC2-like virtualisation platform with non-RDP access

    - by code'
    I'm looking into setting up a few small VMs. The trouble is that the software I intend to install on them (Cisco VPN Client) blocks all networking other than to the target VPN destination, with no workaround. This means that Remote Desktop, or any other method of connecting to the VMs over the Internet (e.g. GoToMyPC, LogMeIn), is a non-starter. What I'm really looking for is an EC2-like platform that gives direct access to the VM through (for example) Hyper-V Manager. Sadly, the only remote-control method such providers seem to offer is direct access via Remote Desktop, whereas I need to be one layer above that (if that makes sense). A viable alternative would be to run virtualisation software within a Windows EC2 instance; obviously hardware virtualisation is impossible there, but I wonder if there are any software virtualisation platforms that could be run and would work. Does anyone know if something like this exists or is possible? Thanks! C

    Read the article

  • rsync to EC2: Identity file not accessible

    - by Richard
    I'm trying to rsync a file over to my EC2 instance:

        rsync -Paz --rsh "ssh -i ~/.ssh/myfile.pem" --rsync-path "sudo rsync" file.pdf [email protected]:/home/ubuntu/

    This gives the following error message:

        Warning: Identity file ~/.ssh/myfile.pem not accessible: No such file or directory.
        [email protected]'s password:

    The pem file is definitely located at ~/.ssh/myfile.pem, though: vi ~/.ssh/myfile.pem shows me the file. If I remove the remote path from the very end of the rsync command:

        rsync -Paz --rsh "ssh -i ~/.ssh/myfile.pem" --rsync-path "sudo rsync" file.pdf [email protected]

    then the command appears to work:

        building file list ...
        1 file to consider
        file.pdf
              41985 100%    8.79MB/s    0:00:00 (xfer#1, to-check=0/1)

        sent 41795 bytes  received 42 bytes  83674.00 bytes/sec
        total size is 41985  speedup is 1.00

    ...but when I go to the remote server, nothing has actually been transferred. What am I doing wrong?
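
    One hedged possibility: the `~` sits inside double quotes, so the local shell never expands it and the remote-shell command receives the literal path. A sketch with the home directory spelled out via $HOME, which is expanded inside double quotes (destination kept as in the original):

        rsync -Paz --rsh "ssh -i $HOME/.ssh/myfile.pem" --rsync-path "sudo rsync" \
            file.pdf [email protected]:/home/ubuntu/

    Note also that without the trailing :path, the second command is a plain local copy, which would explain why it 'works' yet nothing appears on the server.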

    Read the article

  • EC2 instance is blocking all outbound connections, how to diagnose/fix?

    - by Fraggle
    My EC2 instance is blocking all outbound connections:

        wget http://www.google.com   # hangs
        ping google.com              # hangs
        ssh user@anyserver           # hangs

    I ran sudo iptables -F to flush all rules, to no avail. The AWS Management Console shows the security group for that instance has inbound rules allowing ssh and port 80; I can't find anything about outbound rules there. Rebooting the instance changed nothing. If anyone knows how to diagnose or fix this, please help. Adding info:

        [ec2-user@ip-10-112-62-73 ~]$ ifconfig
        eth0      Link encap:Ethernet  HWaddr 12:31:3D:06:31:BB
                  inet addr:10.112.62.73  Bcast:10.112.63.255  Mask:255.255.254.0
                  inet6 addr: fe80::1031:3dff:fe06:31bb/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:1933 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:1764 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:164075 (160.2 KiB)  TX bytes:343256 (335.2 KiB)
                  Interrupt:9

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:8 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:672 (672.0 b)  TX bytes:672 (672.0 b)

        [ec2-user@ip-10-112-62-73 ~]$ ip route show
        10.112.62.0/23 dev eth0  proto kernel  scope link  src 10.112.62.73
        default via 10.112.62.1 dev eth0
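
    A generic diagnostic sketch for this symptom (all standard tools; nothing here is specific to the instance above):

        sudo iptables -L -n -v           # confirm the filter table really is empty
        sudo iptables -t nat -L -n -v    # rules can also hide in the nat table
        ping 10.112.62.1                 # can you reach the default gateway at all?
        ping 8.8.8.8                     # raw IP, to rule out DNS as the culprit
        cat /etc/resolv.conf             # sanity-check the resolver config

    If raw-IP pings to the gateway fail while the interface is UP, the problem is more likely at the EC2 network layer (security group, or a network ACL on the subnet) than inside the OS.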

    Read the article

  • NIC disabled in AWS instance

    - by Elad Lachmi
    Nothing makes you feel dumber than disabling a NIC while in RDP, but here I am :) I have mounted the volume on another instance and tried editing the registry. I have tried enabling auto logon and using runonce to run a netsh command to enable the NIC, but that does not work. I read something about enabling the NIC through the registry directly, but have had no luck in doing this. Has anyone dealt with this type of issue? I'm going nuts! Thank you!

    Read the article

  • Hibernating Ubuntu on EC2 instance?

    - by Dan
    I've got an application I'd like to move to EC2. It will likely spend more than half the day completely dormant, so I'm trying to come up with a good solution for starting and stopping it as needed. It takes a few minutes to start up from nothing, so it would be nice if I could hibernate the OS for quicker resumes. I've seen a couple of forum discussions on the topic of hibernation within EC2, but never anything conclusive. Has anyone found a working solution to this, or at least some resources that could help me out?

    Read the article

  • Can't connect to EC2 instance: Permission denied (publickey)

    - by Assad Ullah
    I got this when I tried to connect to my new instance (UBUNTU 12.01 EC2) with my newly generated key:

        sh-3.2# ssh ec2-user@**** -v ****.pem
        OpenSSH_5.6p1, OpenSSL 0.9.8r 8 Feb 2011
        debug1: Reading configuration data /etc/ssh_config
        debug1: Applying options for *
        debug1: Connecting to **** [****] port 22.
        debug1: Connection established.
        debug1: permanently_set_uid: 0/0
        debug1: identity file /var/root/.ssh/id_rsa type -1
        debug1: identity file /var/root/.ssh/id_rsa-cert type -1
        debug1: identity file /var/root/.ssh/id_dsa type -1
        debug1: identity file /var/root/.ssh/id_dsa-cert type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9p1 Debian-5ubuntu1
        debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.6
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Host '****' is known and matches the RSA host key.
        debug1: Found key in /var/root/.ssh/known_hosts:4
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: Roaming not allowed by server
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Trying private key: /var/root/.ssh/id_rsa
        debug1: Trying private key: /var/root/.ssh/id_dsa
        debug1: No more authentication methods to try.
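
    For comparison, a sketch of the usual key-based invocation (hostname and key name kept redacted as above; the ubuntu login user is an assumption based on Ubuntu AMI defaults):

        chmod 400 ****.pem
        ssh -v -i ****.pem ubuntu@****

    In the pasted command the .pem file is a trailing argument rather than an -i option, so the client only tries the default /var/root/.ssh identities, which matches the debug output above.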

    Read the article

  • Redirect some URL requests to CloudFront and the rest direct to the normal server?

    - by indiehacker
    Say I have two types of URL requests that must be handled by my REST API:

        http://query.restapi.com/image.png?apikey=abc123
        http://query.restapi.com/2.0/<apiKey>/resource.json?from=umi.us_census00.state_geometry

    Is it possible to redirect only the requests for static images (i.e., regex: *.png?.*) to take advantage of CloudFront's caching, and have the rest of the requests go directly to the normal EC2 server (or at least take a speedier indirect route to it)? Perhaps the added request time for CloudFront misses is too small to worry about? Or perhaps my situation is not a good fit for CloudFront? I understand I will need to make a DNS change so that current URL requests like http://query.restapi.com/some.png?apikey=0123 get redirected to http://d1234.cloudfront.net/some.png, but I am hoping there is some way to redirect just the static .png requests to take advantage of CloudFront.
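
    If the origin runs nginx (an assumption; any server with rewrite rules could do the same), a sketch of redirecting only .png requests at the application edge, reusing the d1234.cloudfront.net name from the question:

        location ~* \.png$ {
            return 302 http://d1234.cloudfront.net$request_uri;
        }

    The 302 costs each client an extra round trip per image, which is the trade-off against pointing image URLs at CloudFront directly in the generated HTML.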

    Read the article
