Search Results

Search found 1351 results on 55 pages for 'aws s3'.

Page 14/55

  • Large volume at /mnt on AWS instance

    - by rhaag71
    I know this is probably a somewhat 'dumb' question :) I have a small AWS instance and I just noticed that a ~150 GB volume is attached at /mnt; is this normal? It freaked me out a little, as I was thinking maybe someone was trying to capture whatever I mount in /mnt. The entry is in my fstab too (and I found by Googling that others have it as well):
      /dev/xvdb /mnt auto defaults,nobootwait,comment=cloudconfig 0 2
    I don't have any volume this large in the Volumes section of my AWS console, though. I'm just trying to understand this and be sure that nobody is trying to 'get in', as there are many attempts daily. Thanks
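    For what it's worth, a volume like this is typically the instance's local ephemeral (instance-store) disk, which cloud-init mounts at /mnt via exactly that fstab line; it is tied to the host hardware and never appears under EBS Volumes in the console. A quick way to confirm is to ask the instance metadata service how the block devices were mapped at launch (a minimal sketch; the exact entries returned depend on the instance type):
      # list the block-device mappings the instance was launched with
      wget -qO- http://169.254.169.254/latest/meta-data/block-device-mapping/
      # typically prints entries such as: ami  ephemeral0  root
      # show which device name ephemeral0 was attached as
      wget -qO- http://169.254.169.254/latest/meta-data/block-device-mapping/ephemeral0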

    Read the article

  • How to run sudo command with no password?

    - by aychedee
    tl;dr: How does the ubuntu user on the AWS images for Ubuntu Server 12.04 get passwordless sudo for all commands when there is no configuration for it in /etc/sudoers? I'm using Ubuntu Server 12.04 on Amazon and I want to add a new user that behaves exactly like the default ubuntu user; specifically, I want passwordless sudo for this new user. So I added a new user and went to edit /etc/sudoers (using visudo, of course). From reading that file it seemed the default ubuntu user was getting its passwordless sudo from being a member of the admin group, so I added my new user to that group. That didn't work. Then I tried adding a NOPASSWD directive to sudoers. That didn't work either. Anyway, now I'm just curious: how does the ubuntu user get passwordless privileges if they aren't defined in /etc/sudoers? What is the mechanism that allows this?
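    In case it helps anyone hitting the same wall: on recent Ubuntu cloud images the grant usually lives in a drop-in file under /etc/sudoers.d (pulled in by the #includedir line at the bottom of /etc/sudoers) rather than in /etc/sudoers itself. A minimal sketch of checking that and replicating it for a new user; the file and user names below are placeholders:
      # see where the ubuntu user's rule comes from
      sudo ls /etc/sudoers.d/
      sudo sh -c 'cat /etc/sudoers.d/*'
      # grant the same to a new user via a drop-in, then validate it
      echo 'newuser ALL=(ALL) NOPASSWD:ALL' | sudo tee /etc/sudoers.d/90-newuser
      sudo chmod 440 /etc/sudoers.d/90-newuser
      sudo visudo -c -f /etc/sudoers.d/90-newuser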

    Read the article

  • AWS EC2: How to determine whether my EC2/scalr AMI was hacked? What to do to secure it?

    - by Niro
    I received a notification from Amazon that my instance tried to hack another server. There was no additional information besides this log dump:
      Original report:
      Destination IPs:
      Destination Ports:
      Destination URLs:
      Abuse Time: Sun May 16 10:13:00 UTC 2010
      NTP: N
      Log Extract:
      External 184.xxx.yyy.zzz, 11.842.000 packets/300s (39.473 packets/s), 5 flows/300s (0 flows/s), 0,320 GByte/300s (8 MBit/s)
    (184.xxx.yyy.zzz is my instance's IP.) How can I tell whether someone has penetrated my instance? What steps should I take to make sure the instance is clean and safe to use? Is there an intrusion detection technique or log I can use? Any information is highly appreciated.
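    Not an exhaustive incident-response plan, but a few checks that are commonly run first (paths assume a Debian/Ubuntu-style image; the safest course after a confirmed compromise is still to snapshot the disk for forensics and rebuild from a clean AMI):
      last -a                                  # recent logins and their source addresses
      w                                        # who is on the box right now
      sudo netstat -tulpn                      # listening sockets and the processes behind them
      ps auxww                                 # anything unfamiliar running?
      sudo ls -la /etc/cron* /var/spool/cron   # unexpected cron jobs
      cat ~/.ssh/authorized_keys               # keys you did not add
      sudo less /var/log/auth.log              # brute-force and successful-login history
      # optional scanners; a starting point rather than proof of cleanliness
      sudo apt-get install rkhunter chkrootkit && sudo rkhunter --check && sudo chkrootkit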

    Read the article

  • As an experiment I want to work a bit with AWS. How much might I expect to pay?

    - by dartdog
    I'm about to go to PyCon, and while I have my hosting at WebFaction, one of the tutorials (JKM) asks students to have AWS instances. I've been trying to figure out what some minimum-charge examples might look like. I'll have a LAMP server with Django and the requisite amount of storage, but next to no traffic. Anyone have some guidance or advice? My Google searches and a look here did not turn up much useful information.

    Read the article

  • Amazon releases the AWS SDK for PHP 2, a version rewritten entirely on PHP 5.3 to optimize access to its cloud services

    Amazon releases the AWS SDK for PHP 2, a version rewritten entirely on PHP 5.3 to optimize access to its cloud services. Amazon has just published the new version of the AWS (Amazon Web Services) SDK for PHP. The AWS SDK for PHP lets developers using the language build applications that exploit the services of the cloud platform, including DynamoDB, Amazon Simple Storage Service (Amazon S3), Amazon Glacier and Amazon CloudFront. The new AWS SDK has been rebuilt entirely from scratch to take full advantage of PHP 5.3 and to follow the recommendations of the PHP Framework Interop Group.

    Read the article

  • Deploying a Git server on an AWS Linux instance

    - by Leroux
    I'm setting up a Git server on my Linux instance in AWS. I tried following these instructions, but in the end I always get stuck with a "Permission denied (publickey)" message. Here are my detailed steps; the client is my Windows machine running msysgit and the server is the AWS Ubuntu instance:
    1) I created the git user with a simple password.
    2) Created the ssh directory at ~/.ssh.
    3) On the client I created SSH keys using ssh-keygen -t rsa -b 1024; they were dropped into my /Users/[Name]/.ssh directory, and the id_rsa and id_rsa.pub key pair was created.
    4) Using Notepad I copy-pasted the text into newly created files on the server in the ~/.ssh directory of my git user; ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub were copied.
    5) On the server I made the authorized_hosts file using "cat id_rsa.pub authorized_hosts" (while inside the .ssh directory).
    6) To test it, on my client machine I ran ssh -v git@[ip.address].
    7) Result:
      debug1: Host 'ip.address' is known and matches the RSA host key.
      debug1: Found key in /c/Users/[Name]/.ssh/known_hosts:1
      debug1: ssh_rsa_verify: signature correct
      debug1: SSH2_MSG_NEWKEYS sent
      debug1: expecting SSH2_MSG_NEWKEYS
      debug1: SSH2_MSG_NEWKEYS received
      debug1: SSH2_MSG_SERVICE_REQUEST sent
      debug1: SSH2_MSG_SERVICE_ACCEPT received
      debug1: Authentications that can continue: publickey
      debug1: Next authentication method: publickey
      debug1: Trying private key: /c/Users/[Name]/.ssh/identity
      debug1: Trying private key: /c/Users/[Name]/.ssh/id_rsa
      debug1: Offering public key: /c/Users/[Name]/.ssh/id_dsa
      debug1: Authentications that can continue: publickey
      debug1: No more authentication methods to try.
      Permission denied (publickey).
    I would appreciate any insight anyone can give me.
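    For comparison, here is a minimal sketch of the server-side layout that OpenSSH expects (file paths and the host name are placeholders). The usual stumbling blocks are that only the public key belongs on the server, that the file must be named authorized_keys rather than authorized_hosts, and that ~/.ssh and the key file need strict permissions:
      # on the server, as the git user
      mkdir -p ~/.ssh && chmod 700 ~/.ssh
      # append the client's PUBLIC key only; the private key (id_rsa) stays on the client
      cat /path/to/id_rsa.pub >> ~/.ssh/authorized_keys
      chmod 600 ~/.ssh/authorized_keys
      # then retest from the client
      ssh -v git@your.server.address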

    Read the article

  • Cloud based backup solutions based on open standards?

    - by Rick
    I am looking for a solution to back up and consolidate important media from a couple of Windows laptops and a Mac laptop. I would like a solution that is based on open standards, so my data isn't trapped by proprietary formats and proprietary protocols, and I would like the ability to switch clients or change providers in the future. For example, something like Jungle Disk plus S3 sounds like a great option, but I am having trouble confirming how, or if, this can be set up to meet these criteria. Are there any real or de facto standards for treating S3 as a filesystem? If so, what Windows and Mac clients support these standards?

    Read the article

  • s3cmd runs from the command line but not from cron

    - by Jonar
    Many have said that the problem is with the environment, but I still can't seem to solve this. BTW, I am using Ubuntu 9.10. Logged in as my user and then after sudo -s, this command worked:
      s3cmd put file s3://bucket
    Now here is the simple script intended for testing:
      #! /bin/bash
      env >/tmp/cronjob.log
      s3cmd put file s3://bucket
    Installed with crontab -e:
      * * * * * /opt/script 2>&1 | logger
    Then tailing the syslog shows:
      Dec 3 23:22:01 ubuntu CRON[10795]: (root) CMD (/opt/script 2&1 | logger)
    But verifying with S3Fox Organizer, the file is not uploaded. (I tried changing to #! /bin/sh (no effect), putting the cron entry in /etc/crontab (no effect), and setting HOME=/home/user (no effect).) What other options can I try, or how else can I debug this? Thanks
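    One common way to take the environment out of the equation is to point s3cmd at its configuration file explicitly and capture its output, since the config it finds under cron is not necessarily the one created interactively. A sketch, with all paths as placeholders:
      #!/bin/bash
      # /opt/script - cron-safe wrapper around s3cmd
      CONFIG=/root/.s3cfg            # the file written by 's3cmd --configure' for the cron user
      LOG=/tmp/s3cmd-cron.log
      /usr/bin/s3cmd --config "$CONFIG" put /path/to/file s3://bucket >>"$LOG" 2>&1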

    Read the article

  • Running a Mongo Replica Set on Azure VM Roles

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/10/15/running-a-mongo-replica-set-on-azure-vm-roles.aspx
    Setting up a MongoDB replica set with a bunch of Azure VMs is straightforward stuff. Here's a step-by-step which gets you from 0 to a fully-redundant 3-node document database in about 30 minutes (most of which will be spent waiting for VMs to fire up).
    First, create yourself 3 VM roles, which is the minimum number of nodes you need for high availability. You can use any OS that Mongo supports. This guide uses Windows, but the only difference will be the mechanism for starting the Mongo service when the VM starts (Windows service, daemon etc.). While the VMs are provisioning, download and install Mongo locally, so you can set up the replica set with the Mongo shell. We'll create our replica set from scratch, doing one machine at a time (if you have a single node you want to upgrade to a replica set, it's the same from step 3 onwards):
    1. Set up Mongo. Log into the first node, download Mongo and unzip it to C:. Rename the folder to remove the version – so you have c:\MongoDB\bin etc. – and create a new folder for the logs, c:\MongoDB\logs.
    2. Set up your data disk. When you initialize a node in a replica set, Mongo pre-allocates a whole chunk of storage to use for data replication. It will use up to 5% of your data disk, so if you use a Windows VM image with a default 120 GB disk and host your data on C:, then Mongo will allocate 6 GB for replication. And that takes a while. Instead you can create yourself a new partition by shrinking down the C: drive in Computer Management, by say 10 GB, and then creating a new logical disk for your data from that spare 10 GB, which will be allocated as E:. Create a new folder, e:\data.
    3. Start Mongo. When that's done, start a command line, point to the Mongo binaries folder, install Mongo as a Windows service running in replica-set mode, and start the service:
      cd c:\mongodb\bin
      mongod -logpath c:\mongodb\logs\mongod.log -dbpath e:\data -replSet TheReplicaSet --install
      net start mongodb
    4. Open the ports. Mongo uses port 27017 by default, so you need to allow access in the machine and in Azure. In the VM, open Windows Firewall and create a new inbound rule to allow access via port 27017. Then in the Azure Management Console for the VM role, under the Configure tab, add a new rule, again to allow port 27017.
    5. Initialise the replica set. Start up your local Mongo shell, connecting to your Azure VM, and initiate the replica set:
      c:\mongodb\bin\mongo sc-xyz-db1.cloudapp.net
      rs.initiate()
    This is the bit where the new node (at this point the only node) allocates its replication files, so if your data disk is large, this can take a long time (if you're using the default C: drive with 120 GB, it may take so long that rs.initiate() never responds; if you're sat waiting more than 20 minutes, start another instance of the mongo shell pointing at the same machine to check on it). Run rs.conf() and you should see one node configured.
    6. Fix the host name for the primary – *don't miss this one*. For the first node in the replica set, Mongo on Windows doesn't populate the full machine name. Run rs.conf() and the name of the primary is sc-xyz-db1, which isn't accessible to the outside world. The replica set configuration needs the full DNS name of every node, so you need to manually rename it in your shell, which you can do like this:
      cfg = rs.conf()
      cfg.members[0].host = 'sc-xyz-db1.cloudapp.net:27017'
      rs.reconfig(cfg)
    When that returns, rs.conf() will have your full DNS name for the primary, and the other nodes will be able to connect. At this point you have a working database, so you can start adding documents, but there's no replication yet.
    7. Add more nodes. For the next two VMs, follow steps 1 through 4, which will give you a working Mongo database on each node, which you can add to the replica set from the shell with rs.add(), using the full DNS name of the new node and the port you're using:
      rs.add('sc-xyz-db2.cloudapp.net:27017')
    Run rs.status() and you'll see your new node in STARTUP2 state, which means it's initializing and replicating from the PRIMARY. Repeat for your third node:
      rs.add('sc-xyz-db3.cloudapp.net:27017')
    When all nodes have finished initializing, you will have a PRIMARY and two SECONDARY nodes showing in rs.status(). Now you have high availability, so you can happily stop db1, and one of the other nodes will become the PRIMARY with no loss of data or service.
    Note – the process for AWS EC2 is exactly the same, but with one important difference. On the Azure Windows Server 2012 base image, the MongoDB release for 64-bit 2008R2+ works fine, but on the base 2012 AMI that release keeps failing with a UAC permission error. The standard 64-bit release is fine, but it lacks some optimizations that are in the 2008R2+ version.

    Read the article

  • How can I set paperclip's storage mechanism based on the current Rails environment?

    - by John Reilly
    I have a Rails application that has multiple models with Paperclip attachments that are all uploaded to S3. This app also has a large test suite that is run quite often. The downside is that a ton of files are uploaded to our S3 account on every test run, making the test suite slow. It also slows down development a bit, and requires an internet connection in order to work on the code. Is there a reasonable way to set the Paperclip storage mechanism based on the Rails environment? Ideally, our test and development environments would use local filesystem storage and the production environment would use S3 storage. I'd also like to extract this logic into a shared module of some kind, since several models will need this behavior. I'd like to avoid a solution like this inside every model:
      ### We don't want to do this in our models...
      if Rails.env.production?
        has_attached_file :image, :styles => {...},
          :storage => :s3,
          # ...etc...
      else
        has_attached_file :image, :styles => {...},
          :storage => :filesystem,
          # ...etc...
      end
    Any advice or suggestions would be greatly appreciated! :-)

    Read the article

  • Automatically Applying Security Updates for AWS Elastic Beanstalk

    - by Eric Anderson
    I've been a fan of Heroku since its earliest days, but I like the fact that AWS Elastic Beanstalk gives you more control over the characteristics of the instances. One thing I love about Heroku is that I can deploy an app and not worry about managing it; I am assuming Heroku ensures all OS security updates are applied in a timely fashion, so I just need to make sure my app is secure. My initial research on Beanstalk shows that although it builds and configures the instances for you, after that it moves to a more manual management process, and security updates won't automatically be applied to the instances. There seem to be two areas of concern:
    1) New AMI releases. As new AMI releases come out, it seems we would want to run the latest (presumably most secure), but my research indicates you need to manually launch a new setup to see the latest AMI version and then create a new environment to use it. Is there a better, automated way of rotating your instances onto new AMI releases?
    2) In between releases there will be security updates for individual packages. It seems we want to apply those as well. My research suggests people install commands to occasionally run a yum update, but since new instances are created and destroyed based on usage, new instances would not always have the updates (i.e. the window between instance creation and the first yum update), so occasionally you will have instances that aren't patched, and you will also have instances constantly patching themselves until the new AMI release is applied. My other concern is that these security updates haven't gone through Amazon's own review (like the AMI releases do) and applying them automatically might break my app. I know DreamHost once had a 12-hour outage because they were applying Debian updates completely automatically without any review, and I want to make sure the same thing doesn't happen to me.
    So my question is: does Amazon provide a way to offer a fully managed PaaS like Heroku? Or is AWS Elastic Beanstalk really just an install script, after which you are on your own (other than the monitoring and deployment tools they provide)?
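    For the in-between-releases case, one stopgap people use is a cron entry baked into the instances (via a custom AMI or, where supported, an .ebextensions configuration file) that pulls security updates on a schedule. A sketch, with the schedule and paths as placeholders:
      # /etc/cron.d/yum-security
      # apply security updates nightly; --security needs yum's security plugin,
      # drop the flag to take all available updates instead
      0 4 * * * root yum -y --security update >> /var/log/yum-security.log 2>&1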

    Read the article

  • Drupal Sites Backup and Restore to Amazon S3

    - by Ngu Soon Hui
    There are modules written for database backup and for file backup, but what I want is a complete backup to Amazon S3 or another cloud platform, covering both the data and the sites. As it stands, I have to back up the two separately and manually. Is there any module, tool or already-written script that allows me to do that?
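    In the absence of a single module that does both, a small cron-driven script is a common workaround: dump the database with drush, archive the site directory, and push both to S3 with s3cmd. A sketch, with every path and bucket name being a placeholder:
      #!/bin/bash
      SITE=/var/www/drupal                 # Drupal docroot
      BUCKET=s3://my-backup-bucket         # destination bucket
      STAMP=$(date +%Y%m%d)
      drush -r "$SITE" sql-dump --gzip --result-file=/tmp/db-$STAMP.sql
      tar czf /tmp/site-$STAMP.tar.gz -C "$SITE" .
      s3cmd put /tmp/db-$STAMP.sql.gz /tmp/site-$STAMP.tar.gz "$BUCKET/$STAMP/"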

    Read the article

  • Configuring DNS on my Amazon AWS [closed]

    - by Ricardo
    So, I have an AWS EC2 instance running Ubuntu 11.04 with Webmin configured, and I have a domain pointed at my IP. Now I need to redirect my domain to my IP and configure the BIND DNS server? What configuration do I have to do to point my domain to my IP and create my own DNS server? I have watched some videos on YouTube, but I don't know which approach is best for me. Thanks for any help.
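    Two separate things tend to get mixed up here: simply pointing the domain at the instance (an A record at your registrar or DNS provider, ideally to an Elastic IP so it survives stop/start) versus actually hosting the zone yourself with BIND. If you do want to run BIND on the instance, a minimal sketch; every name and address below is a placeholder:
      sudo apt-get install bind9
      # /etc/bind/named.conf.local
      zone "example.com" {
          type master;
          file "/etc/bind/db.example.com";
      };
      # /etc/bind/db.example.com
      $TTL 3600
      @    IN SOA ns1.example.com. admin.example.com. ( 2012010101 3600 900 604800 3600 )
           IN NS  ns1.example.com.
      @    IN A   203.0.113.10
      ns1  IN A   203.0.113.10
      www  IN A   203.0.113.10
      # reload, and remember to open UDP/TCP 53 in the security group
      sudo service bind9 restart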

    Read the article

  • Problems migrating an EBS-backed instance across AWS regions

    - by gshankar
    Note: I asked this question on the EC2 forums too but haven't received any love there; hopefully the Server Fault community will be more awesome. The new AWS Sydney region opening up is something we've been waiting on for a long time, but I'm having a lot of trouble migrating our instances over from N. California. I managed to migrate one instance by using CloudyScripts to move a snapshot and then firing up a new instance in the Sydney region. That was a very new instance, so both the source and destination were running Ubuntu 12.04 LTS, and I had no issues there. However, the rest of our instances are all Ubuntu 10.04 LTS, and with these I'm having a lot of problems. I've tried the following:
    1) Following the AWS whitepaper on moving instances, which was given to us at the recent Customer Appreciation Day in Sydney where the new region was launched. The problem with this approach is the last step (step 19), where you register the image:
      ec2-register -s snap-0f62ec3f -n "Wombat" -d "migrated Wombat" --region ap-southeast-2 -a x86_64 --kernel aki-937e2ed6 --block-device-mapping "/dev/sdk=ephemeral0"
    I keep getting this error:
      Client.InvalidAMIID.NotFound: The AMI ID 'ami-937e2ed6' does not exist
    which I think is due to the kernel_id not existing in the Sydney region?
    2) Using CloudyScripts to move a snapshot and then creating a new volume and attaching it to a new instance in Sydney. This results in the instance just hanging on boot and failing the status checks; I can't SSH in or look at the server log.
    I suspect my issue is finding the right kernel_id for the volume in the new region, but I can't work out how to find that kernel_id. The ones I've tried (from the original instance) don't produce the Client.InvalidAMIID.NotFound error, and any other kernel_id just won't boot. I've tried both 12.04 and 10.04 versions of Ubuntu. Nothing seems to work; I've been banging my head against a wall for a while now, please help!
    New (broken) instance: i-a1acda9b, ami-9b8611a1, aki-31990e0b
    Source instance: i-08a6664e, ami-b37e2ef6, aki-937e2ed6
    P.S. I also tried following this guide on updating my Ubuntu LTS version to 12.04 before doing the migration, but it didn't seem to work either; I still get stuck on updating the kernel_id: http://ubuntu-smoser.blogspot.com.au/2010/04/upgrading-ebs-instance.html
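    On the kernel question specifically, AKI IDs are region-scoped, so the source instance's aki-937e2ed6 will never exist in ap-southeast-2; the usual approach is to list the public kernels in the target region and register with one that matches your architecture and boot style (for Ubuntu EBS images that generally means a pv-grub kernel). A sketch using the EC2 API tools, with the grep pattern as an assumption:
      # list Amazon-owned kernel images available in the Sydney region
      ec2-describe-images -o amazon --region ap-southeast-2 --filter image-type=kernel | grep pv-grub
      # then re-run ec2-register with the aki-xxxxxxxx that matches your architecture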

    Read the article

  • Ubuntu Software RAID 0 on AWS Does Not Survive Reboot

    - by Eric J.
    I'm experimenting with creating a software RAID 0 device from 4 EBS volumes on Ubuntu 9.10 running at Amazon AWS, following this guide: http://alestic.com/2009/06/ec2-ebs-raid. The device appears (and according to SysBench is 3.5x faster than a regular attached EBS volume). The problem is, when I reboot the instance, all files on the RAID device are gone. The device is available and mounted where expected, but contains no files. I am able to write new files to it, which survive until the next reboot.
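    A common cause is that the array isn't being reassembled from its superblocks at boot (or something else is being mounted over the mount point), so persisting the array definition and the mount is the usual fix. A sketch using Ubuntu paths; device name, mount point and filesystem are placeholders to adjust to your setup:
      # record the existing array so it reassembles at boot
      sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
      sudo update-initramfs -u
      # mount the md device from fstab rather than by hand, e.g.:
      echo '/dev/md0 /raid xfs noatime 0 0' | sudo tee -a /etc/fstab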

    Read the article

  • Redirecting specific traffic to Amazon AWS

    - by yoav r
    My server has received a sudden increase in (read) web traffic requesting many map image tiles, and Apache cannot handle it. Apache cannot even handle the redirections! The average load on my CentOS machine is more than 200. Is there software out there that can redirect SOME of the traffic, such as only the traffic for a specific directory (e.g. http://example.com/maptiles/abc.png), to a different address (such as http://s3.amazonaws.com/mytiles/abc.png)? Can this be done with HAProxy?
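    HAProxy can issue redirects, and so can a lightweight front end such as nginx placed in front of (or instead of) Apache for that path. A minimal nginx sketch that 301s the tile prefix straight to S3; the host, bucket name and the Apache back-end port are taken from the example above or are placeholders:
      server {
          listen 80;
          server_name example.com;

          # send tile requests straight to the S3 bucket
          location /maptiles/ {
              rewrite ^/maptiles/(.*)$ http://s3.amazonaws.com/mytiles/$1 permanent;
          }

          # everything else continues to the existing Apache, e.g. moved to port 8080
          location / {
              proxy_pass http://127.0.0.1:8080;
          }
      }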

    Read the article

  • Providing a static IP for resources behind AWS Elastic Load Balancer (ELB)

    - by tharrison
    I need a static IP address that handles SSL traffic from a known source (a partner). Our servers are behind an AWS Elastic Load Balancer (ELB), which cannot provide a static IP address; there are many threads about this here. My thought is to create an instance in EC2 whose sole purpose in life is to be a reverse proxy server with its own IP address, accepting HTTPS requests and forwarding them to the load balancer. Are there better solutions?
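    That pattern (a small EC2 instance holding an Elastic IP and running nginx or HAProxy as an SSL-terminating reverse proxy in front of the ELB) is a common workaround. A minimal nginx sketch; every name, path and the ELB hostname below are placeholders:
      server {
          listen 443 ssl;
          server_name partner-endpoint.example.com;
          ssl_certificate     /etc/nginx/ssl/partner.crt;
          ssl_certificate_key /etc/nginx/ssl/partner.key;

          location / {
              proxy_pass https://my-app-1234567890.us-east-1.elb.amazonaws.com;
              proxy_set_header Host $host;
              proxy_set_header X-Forwarded-For $remote_addr;
          }
      }
    One caveat: nginx resolves the proxy_pass hostname when it starts, and ELB addresses change over time, so plan for periodic reloads (or a resolver-based setup) so the proxy keeps following the ELB.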

    Read the article

  • How to change default user (ubuntu) via CloudInit on AWS

    - by Gui Ambros
    I'm using cloud-init to automate the startup of my instances on AWS. I followed the (scarce) documentation available at http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/annotate/head%3A/doc/examples/cloud-config.txt and the examples in /usr/share/doc/cloud-init, but still haven't figured out how to change the default username (ubuntu, id: 1000). I know I can create a script to manually delete the default ubuntu user and add my own, but that seems counterintuitive given that cloud-init exists exactly to automate the initial setup. Any ideas?
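    One route that stays inside cloud-init without touching its config files is to pass a plain shell script as user data (cloud-init executes user data that starts with #!). A sketch that creates a replacement admin user at first boot; the username is a placeholder, and removing the stock ubuntu user is left commented out until the new login is verified:
      #!/bin/bash
      NEWUSER=myadmin
      adduser --disabled-password --gecos "" "$NEWUSER"
      install -d -m 700 -o "$NEWUSER" -g "$NEWUSER" /home/$NEWUSER/.ssh
      # reuse the key pair chosen at launch, taken from instance metadata
      wget -qO- http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key \
          > /home/$NEWUSER/.ssh/authorized_keys
      chown "$NEWUSER:$NEWUSER" /home/$NEWUSER/.ssh/authorized_keys
      chmod 600 /home/$NEWUSER/.ssh/authorized_keys
      echo "$NEWUSER ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/90-$NEWUSER
      chmod 440 /etc/sudoers.d/90-$NEWUSER
      # deluser --remove-home ubuntu   # only after confirming the new user can log in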

    Read the article

  • Disable mouse wakeup when suspending on Ubuntu

    - by Shadyabhi
    When I suspend Ubuntu, I can wake the computer up just by moving the mouse. But I don't want the computer to wake when I move the mouse. How can I do that? My /proc/acpi/wakeup file:
      shadyabhi@shadyabhi-desktop:~$ cat /proc/acpi/wakeup
      Device  S-state   Status   Sysfs node
      SLPB      S4    *enabled
      P32       S4    disabled   pci:0000:00:1e.0
      UAR1      S4    disabled   pnp:00:09
      ILAN      S4    disabled   pci:0000:00:19.0
      PEGP      S4    disabled
      PEX0      S4    disabled   pci:0000:00:1c.0
      PEX1      S4    disabled   pci:0000:00:1c.1
      PEX2      S4    disabled   pci:0000:00:1c.2
      PEX3      S4    disabled   pci:0000:00:1c.3
      PEX4      S4    disabled   pci:0000:00:1c.4
      PEX5      S4    disabled
      UHC1      S3    disabled   pci:0000:00:1d.0
      UHC2      S3    disabled   pci:0000:00:1d.1
      UHC3      S3    disabled   pci:0000:00:1d.2
      UHC4      S3    disabled
      EHCI      S3    disabled   pci:0000:00:1d.7
      EHC2      S3    disabled   pci:0000:00:1a.7
      UH42      S3    disabled   pci:0000:00:1a.0
      UHC5      S3    disabled   pci:0000:00:1a.1
      UHC6      S3    disabled   pci:0000:00:1a.2
      AZAL      S3    disabled   pci:0000:00:1b.0
      shadyabhi@shadyabhi-desktop:~$
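    For the record, entries in /proc/acpi/wakeup are toggled by writing the device name back to the file, and USB input devices also have a per-device switch in sysfs; both settings normally reset at reboot, so they tend to end up in rc.local or a udev rule. A sketch, where the device names below are only examples taken from the table above:
      # toggle one of the USB controllers listed above (writing the name flips enabled/disabled)
      echo UHC1 | sudo tee /proc/acpi/wakeup
      # per-device view: list current USB wakeup settings, then disable the mouse's entry
      grep . /sys/bus/usb/devices/*/power/wakeup
      echo disabled | sudo tee /sys/bus/usb/devices/2-1/power/wakeup   # 2-1 is a placeholder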

    Read the article

  • Data Structure Behind Amazon S3's Keys (Filtering Data Structure)

    - by dimo414
    I'd like to implement a data structure similar to the lookup functionality of Amazon S3. For those of you who don't know what I'm talking about: Amazon S3 stores all files at the root, but allows you to look up groups of files by common prefixes in their names, thereby replicating the power of a directory tree without the complexity of it. The catch is that both lookup and filter operations are O(1) (or close enough that, even on very large buckets (S3's disk equivalents), both operations might as well be O(1)). So in short, I'm looking for a data structure that functions like a hash map, with the added benefit of efficient (at the very least not O(n)) filtering. The best I can come up with is extending HashMap so that it also contains a (sorted) list of contents, doing a binary search for the range that matches the prefix, and returning that set. This seems slow to me, but I can't think of any other way to do it. Does anyone know either how Amazon does it, or a better way to implement this data structure?

    Read the article

  • What is the cheapest non-colocation way to serve about 10 static files at a rate of 100 megabits per second?

    - by Mark Maunder
    I've looked at Amazon S3 and it costs roughly $4,746 per month for 100 megabits/s, which translates into 31,640 gigabytes of data transferred (100 Mbit/s is 12.5 MB/s, or about 32.4 TB over a 30-day month, i.e. roughly 31,640 GiB; at $0.15 per GB that comes to about $4,746). I haven't found a cheaper "cloud" option. I'm curious whether there's any other cloud hosting option out there cheaper than S3. Uptime is not an issue because I can build failover for most things into the browser, e.g. I can use JavaScript to say "if the image didn't load then go to this other URL instead." FYI, I'm currently using a colocation facility which is about 30% cheaper than S3, and I'm familiar with colo prices, so this question is really about "cloud" services, by which I mean services where I don't have to worry about the infrastructure.

    Read the article
