Search Results

Search found 2912 results on 117 pages for 'amazon vpc'.

Page 58/117

  • How to move a Windows instance to a different EC2 data centre?

    - by Darren Cook
    I've a Windows EC2 instance running in one region and need to move it to another region (Tokyo and Singapore, in this case). Is that even possible? What potential problems do I need to watch out for? (I found http://stackoverflow.com/questions/2181849/ec2-instance-cloning, which describes how to do it, but it appears to assume Linux instances and the same data centre. Is it possible to move my keys across to another region?)

    I tried something similar with a Windows instance a few months ago, just trying to clone it in the same data centre, but I couldn't get it working quickly, so I had to give up and just create a fresh instance at that time. This time I've got a bit of breathing space, and want to research how to do it properly!

    Root device type: ebs. Block devices: sda1 and xvdf (both are EBS, "attached", and have "Delete on termination" set to "no"; sda1 is the root device). The AMI is described as "Unavailable" (followed by an AMI number).
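
    A sketch of one route using the modern AWS CLI, which postdates this question (the CopyImage API arrived in 2013); all instance/AMI IDs below are placeholders:

        # Create an AMI from the running instance
        aws ec2 create-image --instance-id i-0123456789abcdef0 \
            --name "windows-app-server" --region ap-southeast-1

        # Copy the AMI to the target region
        aws ec2 copy-image --source-image-id ami-0123456789abcdef0 \
            --source-region ap-southeast-1 --region ap-northeast-1 \
            --name "windows-app-server-tokyo"

        # Key pairs are region-scoped, so import or create one in the
        # target region before launching from the copied AMI
        aws ec2 import-key-pair --key-name my-key --region ap-northeast-1 \
            --public-key-material fileb://~/.ssh/id_rsa.pub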

    Read the article

  • What's the best approach to updating a production machine (on EC2) that can't go down?

    - by Ryan Detzel
    We have three main servers on EC2: web, database, and search. I logged in today to find: "77 packages can be updated. 45 updates are security updates." That scares the crap out of me, so I want to update these machines ASAP, but I'm scared to just run the updates on a live running system. Is this safe to do? What's the best approach to applying security updates on production machines?
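
    One conservative pattern, sketched under the assumption that these are Ubuntu boxes with the unattended-upgrades package available: simulate first, then apply only the security pocket, one server at a time:

        # Simulate first: -s shows what would change without installing anything
        sudo apt-get update
        sudo apt-get -s dist-upgrade

        # Dry-run, then apply, only packages from the security archive
        sudo unattended-upgrade --dry-run -d
        sudo unattended-upgrade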

    Read the article

  • Easiest way to get feedback from an EC2 instance

    - by Sanity
    I need to run a script on an EC2 instance once a day, and I'd like some easy way for it to let me know if something went wrong. I would prefer not to modify the original image, which is a recent version of Ubuntu, so ideally I'd like to do all the setup in the script I pass to the instance through the ec2-run-instances command. I've considered creating a Gmail account for it and letting it send email through that, but the setup was rather involved, with certificates and such things. I've looked at using the Gist API, but anything uploaded through it is public. The Google command-line tool also appears quite complicated to set up. Is there an easier way to do this?
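
    One low-setup option today (assuming the instance has credentials or an IAM role allowing SNS, and the topic ARN below is a placeholder) is to publish failures to an SNS topic with an email subscription:

        # Wrapper around the daily job: on failure, mail the log tail via SNS
        if ! /path/to/daily-job.sh > /tmp/daily-job.log 2>&1; then
            aws sns publish \
                --topic-arn arn:aws:sns:us-east-1:123456789012:job-alerts \
                --subject "EC2 daily job failed" \
                --message "$(tail -n 20 /tmp/daily-job.log)"
        fi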

    Read the article

  • Amazon Careers website - are resumes processed in plain text format only?

    - by sapphiremirage
    The submission site has the following options: "Please upload your resume (Word Document, max size: 512 KB)" OR "Please copy and paste the text version of your file here", with a text box below the latter option. I went ahead and uploaded my shiny LaTeX resume (as a PDF), despite the fact that they seem to want a Word document, and there didn't seem to be any issues. However, when I went back to edit my profile, there was no evidence that my PDF had been uploaded, other than a text version of my resume, awfully formatted and clearly stripped from the PDF, sitting in the text box below "Please copy and paste the text version of your file here". Exasperated, I did a quick and dirty copy of the text from my resume into a Word doc and uploaded that. Same result: no evidence of a file uploaded, just a stripped text version in the text box.

    What I'm wondering now is: are they only going to look at the text version of my resume? If that's the case, then I'm obviously going to edit it so that it looks halfway decent and doesn't contain such atrocities from the conversion as "Other Skills: LTEX". I can pretty up a text file without too much effort, so this isn't that big a deal. However, my LaTeX resume is going to look better than anything I can do in plain text, so if the site is actually keeping a copy of that, then I certainly don't want to override it. Has anyone here either gone through the Amazon hiring process or interviewed candidates and knows how this works? (i.e. when on site with Amazon, did the interviewers have diversely formatted resumes, or did they all look suspiciously similar?)

    Read the article

  • How do I automatically start Clamz with AMZ files for Amazon MP3 downloads?

    - by Takkat
    Chromium can open downloaded files with the default application (e.g. PDFs in Evince). In my setup, a downloaded .amz file (for Amazon MP3) always opened in Gedit. However, I would like all downloaded .amz files to automatically open with Clamz, a command-line download tool that works like a charm. As my .amz files were associated with Gedit in Nautilus too, I thought it was a good idea to add a clamz.desktop file in ~/.local/share/applications (according to this answer):

        [Desktop Entry]
        Encoding=UTF-8
        Name=Clamz
        Comment=Open AMZ files for Amazon MP3 download
        Exec=/usr/bin/clamz %u
        Terminal=True
        Type=Application
        Icon=
        Categories=Application;
        StartupNotify=true
        MimeType=audio/x-amzxml;
        NoDisplay=true

    This lets me choose Clamz as the default application in Nautilus. But when opening an .amz file in Nautilus, it still does not open with Clamz as expected; it is treated as an executable text file instead (note that the executable bit is not set!). Is there any other way to make Chromium or Nautilus always open an .amz file with Clamz? Did I miss a setting in another place?
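
    Two things worth checking, offered as a sketch rather than a verified fix: the desktop-entry spec requires lowercase booleans, so Terminal=True (capital T) may invalidate the entry, and the association can be registered explicitly:

        # Booleans in .desktop files are case-sensitive: use Terminal=true
        # Then register the handler and refresh the local database
        xdg-mime default clamz.desktop audio/x-amzxml
        update-desktop-database ~/.local/share/applications

        # Confirm what the system now resolves for the type
        xdg-mime query default audio/x-amzxml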

    Read the article

  • Is the Cloud ready for an Enterprise Java web application? Seeking JEE hosting advice.

    - by Jakub Holý
    Greetings to all the smart people around here! I'd like to ask whether it is feasible, or a good idea at all, to deploy an enterprise Java web application to a cloud such as Amazon EC2. More exactly, I'm looking at infrastructure options for an application that shall handle a few hundred users with long but neither CPU- nor memory-intensive sessions. I'm considering dedicated servers, virtual private servers (VPSs) and EC2. I've noticed that there is a project called JBoss Cloud, so people are working on enabling such a deployment; on the other hand, it doesn't seem to be mature yet, and I'm not sure the cloud is ready for this kind of application, which differs from typical cloud-based applications like Twitter. Would you recommend deploying it to the cloud? What are the pros and cons?

    The application is a Java EE 5 web application whose main function is to enable users to compose their own customized Product by combining the available Parts. It uses stateless and stateful session beans and JPA for persistence of entities to an RDBMS, and fetches information about Parts from the company's inventory system via a web service. Aside from external users, it's also used by a few internal ones, who are authenticated against the company's LDAP. The application should handle around 300-400 concurrent users building their product, and should be reasonably scalable and available, though these qualities are only of medium importance at this stage.

    I've proposed an architecture consisting of a firewall (FW) and a load balancer supporting sticky sessions and HTTPS (in the cloud this would be replaced with EC2's Elastic Load Balancing service and a FW on the app servers; in a physical architecture the load balancer would be hardware), then two physical, clustered application servers combined with web servers (so that if one fails, a user doesn't lose his/her long-built product), and finally a database server. The DB server would need a slave backup instance that can replace the master instance if it fails. This should provide reasonable availability and fault tolerance, and good scalability as long as a single RDBMS can keep up with the load; that should be OK for quite a while, because most of the operations are done in memory using a stateful bean and only occasionally stored to or retrieved from the DB, and the amount of data is low too. A problematic part could be the dependency on the remote inventory-system web service, but with good caching of its outputs in the application it should be OK too.

    Unfortunately, I have only a vague idea of the system resources (memory size, number and speed of CPUs/cores) that such an "average Java EE application" for a few hundred users needs. My rough and mostly unfounded estimate, based on actual Amazon offerings, is that 1.7 GB and a single 2-core "modern CPU" at around 2.5 GHz (the High-CPU Medium Instance) should be sufficient for either of the two application servers (since we can handle higher load by provisioning more of them). Alternatively, I would consider using the Large Instance (64-bit, 7.5 GB RAM, 2 cores at 1 GHz).

    So my question is whether such a deployment to the cloud is technically and financially feasible, or whether dedicated/VPS servers would be a better option, and whether there are some real-world experiences with something similar. Thank you very much!
    /Jakub Holy

    PS: I've found the "JBoss EAP in a Cloud" case study, which shows that it is possible to deploy a real-world Java EE application to the EC2 cloud, but unfortunately there are no details regarding topology, instance types, or anything :-(

    Read the article

  • Problem with openssl_get_privatekey returning false

    - by Joe Corkery
    I am trying to generate a signed URL for Amazon's CloudFront service, but am running into problems in that the openssl_get_privatekey function appears to be returning false, and I can't quite figure out why. Here is the code (PHP) that I am using:

        $priv_key = file_get_contents(path_to_my_pem_file);
        $priv_keyid = openssl_get_privatekey($privkey);

    Unfortunately, every time I try this, openssl_get_privatekey fails silently, and I run into errors when I try to sign with openssl_sign later on. I've tried printing out the contents of $priv_key after it has been read in, and it appears to be correct. I'm running this on RHEL 5.4 using PHP 5.2.13. I've confirmed that the .pem file is readable, and I've also run dos2unix on it just in case (it didn't work before or after). Any thoughts would be greatly appreciated, as I am relatively new to both PHP and OpenSSL.
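
    One detail that jumps out of the snippet as posted: the key is read into $priv_key, but $privkey (no underscore) is what gets passed to openssl_get_privatekey, which would make it fail in exactly this way. A sketch of the corrected call with OpenSSL's error queue drained (the path is a placeholder):

        <?php
        // Variable names must match: $priv_key in, $priv_key out
        $priv_key = file_get_contents('/path/to/cloudfront-key.pem');
        $priv_keyid = openssl_get_privatekey($priv_key);

        if ($priv_keyid === false) {
            // The extension fails silently unless you read its error queue
            while ($msg = openssl_error_string()) {
                echo $msg, "\n";
            }
        }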

    Read the article

  • 404 redirect with cloud storage

    - by Jeremy DeGroot
    I'm hoping to reach someone with experience using a service like Amazon's S3 with this question. On my site we have a dedicated image server, and on this server we have an automatic 404 redirect through Apache so that, if a user tries to access an image that doesn't exist, they'll see a snazzy "Image Not Available" image. We're looking to move the hosting of these images to a cloud storage solution (S3 or Rackspace's Cloud Files), and I'm wondering if anyone has had any success replicating this behavior on a cloud storage service, and if so, how they did it.
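
    For S3 specifically, the static website hosting feature (added in early 2011) supports a custom error document, which reproduces the Apache behavior; a sketch with a placeholder bucket and document name:

        # Serve image-not-available.html for any missing key; note this
        # only works via the website endpoint
        # (bucket.s3-website-<region>.amazonaws.com), not the REST endpoint
        aws s3 website s3://my-image-bucket/ \
            --index-document index.html \
            --error-document image-not-available.html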

    Read the article

  • What does the EC2 command line say when a machine won't start?

    - by OneSolitaryNoob
    When starting an instance on Amazon EC2, how would I detect a failure, for instance, if there's no machine available to fulfill my request? I'm using one of the less-common machine types and am concerned it won't start up, but am having trouble finding out what message to look for to detect this. I'm using the EC2 commandline tools to do this. I know I can look for 'running' when I do ec2-describe-instance to see if the machine is up, but don't know what to look for to see if the startup failed. Thanks!
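
    With the legacy tools, the state column flips to "terminated" and a reason code appears alongside it; the same information is easier to pull with today's AWS CLI (the instance ID is a placeholder):

        # State plus the reason Amazon records when a start fails, e.g.
        # Server.InsufficientInstanceCapacity
        aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
            --query 'Reservations[].Instances[].[State.Name,StateReason.Code,StateReason.Message]' \
            --output text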

    Read the article

  • ASP.NET State Server on EC2 not connecting

    - by CountCet
    I am trying to set up an ASP.NET State Server on Amazon EC2. The single web server using this State Server is also on EC2. I've done the following: added the IIS role on the State Server machine; changed the registry value to allow remote connections for the service and started the aspstate service; verified it is listening on port 42424 by checking netstat; edited the web.config of the web server to point to this server; and added the TCP port to my EC2 security group, allowed for all incoming IPs. Anything else I should be doing?

    Read the article

  • Openfire scalability question (XMPP server)

    - by candoyo
    Hello! Do you guys know how well Openfire scales? Our users will be using the application for normal chatting, like MSN; no file transfer for now. We will be using Amazon's EC2 servers to run the chat server. We would like to support over 1 million users in total and around 30-50K active users during peak times. Since clustering is now open source, I thought Openfire might be the way to go. How much will the Coherence license cost, or can I bypass that somehow? Also, I want to develop a plugin for Openfire if we go with it. Any pointers on how to set up a dev environment and get going would be helpful too! Thanks, y'all! :)

    Read the article

  • DB2 Transaction log is full. How to flush / clear it?

    - by Mestika
    Hi, I'm working on an experiment for a course I'm taking about tuning DB2. I'm using Amazon's EC2 (AWS) to conduct the experiment. My problem, however, is that I have to test no compression against row compression in DB2, and to do that I've created a bsh file that runs the experiments. But when I reach the compression part, I get the error "Transaction log is full", and no matter how low I set the number of inserts, it complains about my transaction log. I've scoured Google for a day now trying to find some way to flush or clear the log, or just get rid of it; I don't need it. I've tried to increase the size, but nothing has helped. Please, I hope someone has an answer to solve this frustrating problem. Thanks - Mestika
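
    A sketch of the usual two-part fix, assuming a database named MYDB: give the log more room, and commit in batches inside the insert loop so DB2 can recycle log files:

        # Inspect the current log settings
        db2 get db cfg for MYDB | grep -i log

        # Enlarge the log: per-file size (in 4 KB pages), primary + secondary files
        db2 update db cfg for MYDB using LOGFILSIZ 8192 LOGPRIMARY 16 LOGSECOND 32
        db2 terminate    # new values apply on the next connection

        # In the bsh script, COMMIT every few thousand inserts instead of once
        # at the end; one huge uncommitted transaction pins the entire log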

    Read the article

  • ec2_bundle_vol fails with error LoadError

    - by Koran
    Hi, I am a newbie to Amazon EC2 setup. I have now set up a machine to my taste, and I now want to bundle it. I am running the following command from the launched instance:

        root@domU-21-34-67-26-ED-Z4:~# ec2-bundle-vol -r i386 -d /mnt \
            -p ACT-VOL -u 8940-1355-4155 -k /tmp/pk-key.pem \
            -c /tmp/cert.pem -s 10240 \
            -e /mnt,/root/.ssh,/home/ubuntu/.ssh
        ruby: No such file or directory -- /home/ubuntu/ec2tools/ec2-api-tools-1.3-46266/lib/ec2/amitools/bundlevol.rb (LoadError)

    The Ruby version is 1.8.7. I searched the internet and installed libruby1.8-extras etc. too, but to no avail. I also tried running it from site_ruby (/usr/local/lib/site_ruby), but no use. I tried installing Ruby 1.8.6 but was unable to find a way to do so. Any help would be much appreciated. Thanks, K
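
    One thing the traceback hints at: it is looking for bundlevol.rb inside ec2-api-tools, but ec2-bundle-vol ships with Amazon's separate AMI tools. A hedged sketch of the fix (paths are placeholders, and the package name may vary by release):

        # Install the AMI tools (Ubuntu has carried an ec2-ami-tools package
        # in multiverse), or unzip Amazon's ec2-ami-tools archive, then point
        # the environment variable at them
        sudo apt-get install ec2-ami-tools
        # or, from the zip:
        export EC2_AMITOOL_HOME=/opt/ec2-ami-tools-x.y.z    # placeholder path
        export PATH=$EC2_AMITOOL_HOME/bin:$PATH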

    Read the article

  • What can I use for voluntary donations? (like Tipjoy)

    - by Ken
    There used to be a site called Tipjoy that would let me put a small "donate" button on a webpage, and users could donate small amounts (like 25c) to me easily. I think it was a pretty neat idea: I want a way for people to give me money, I don't like advertisements, and I don't update regularly enough to sell subscriptions like bloggers do. I just have some simple web services and an open-source program, and I want an easy way for people to drop me some change if they find them useful. I've found out that Amazon used to have a similar service, but it's also been shut down. Is there any similar web service available today? If not, what's the closest thing on offer: a PayPal link?

    Read the article

  • How to monitor and maintain my grails application in live/production environment?

    - by fabien7474
    It is the first time I have ever launched a website live (built with the Grails web framework, on the Amazon EC2 platform via Cloud Foundry), and I quickly realized that I am not ready to monitor and maintain my application correctly in production mode (fortunately, the website is accessible to a very limited number of users). The issues I have faced so far:

      - I cannot change my views; I need to redeploy my application.
      - I have no monitoring. I don't know who is connected, or when they sign in / sign out...
      - Redeploying my application (upload WAR + deploy) takes at least 30 minutes.
      - I don't know how to restart my Tomcat server without a redeploy through Cloud Foundry!

    So, my question is very simple: what tools (including Grails plugins) and methods can you recommend to take me out of my current blindness?

    Read the article

  • s3 / php script looping (strace)

    - by Neil
    Is anyone using the following PHP S3 client library? http://undesigned.org.za/2007/10/22/amazon-s3-php-class It had been working fine for me for a few days; I just noticed that a script I have in place now ends up hanging. Running it through strace, I see something like:

        poll([{fd=4, events=POLLOUT}], 1, 1000) = 1 ([{fd=4, revents=POLLHUP}])
        poll([{fd=4, events=POLLOUT}], 1, 0) = 1 ([{fd=4, revents=POLLHUP}])
        poll([{fd=4, events=POLLOUT}], 1, 1000) = 1 ([{fd=4, revents=POLLHUP}])
        poll([{fd=4, events=POLLOUT}], 1, 0) = 1 ([{fd=4, revents=POLLHUP}])
        poll([{fd=4, events=POLLOUT}], 1, 1000) = 1 ([{fd=4, revents=POLLHUP}])

    Looking at what's running, I see that it's not even getting to the point where it makes the curl call. Any thoughts? Thanks!
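
    For what it's worth, that poll()/POLLHUP loop is the classic signature of libcurl spinning on a half-closed socket, so if the hang does turn out to be inside the HTTP request after all, capping curl's timeouts is a cheap guard. A generic sketch (the URL is illustrative, not part of this S3 class's actual API):

        <?php
        // Cap both connect time and total transfer time so a dead socket
        // fails fast instead of spinning in poll()
        $ch = curl_init('https://my-bucket.s3.amazonaws.com/some-key');
        curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 10); // seconds to establish TCP
        curl_setopt($ch, CURLOPT_TIMEOUT, 60);        // hard cap on the whole call
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        $body = curl_exec($ch);
        if ($body === false) {
            error_log('S3 request failed: ' . curl_error($ch));
        }
        curl_close($ch);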

    Read the article

  • Using PIG with Hadoop, how do I regex match parts of text with an unknown number of groups?

    - by lmonson
    I'm using Amazon's Elastic MapReduce. I have log files that look something like this:

        random text foo="1" more random text foo="2" more text noise
        foo="1" blah blah blah
        foo="1" blah blah foo="3" blah blah foo="4"
        ...

    How can I write a Pig expression to pick out all the numbers in the 'foo' expressions? I'd prefer tuples that look something like this:

        (1,2)
        (1)
        (1,3,4)

    I've tried the following:

        TUPLES = foreach LINES generate FLATTEN(EXTRACT(line,'foo="([0-9]+)"'));

    But this yields only the first match in each line:

        (1)
        (1)
        (1)
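
    Pig's built-in regex extraction returns a fixed number of groups per call, so an unknown number of repeats generally means a small Java UDF that loops a Matcher over the line and emits a bag. A sketch (the class name and pattern are illustrative):

        import java.io.IOException;
        import java.util.regex.Matcher;
        import java.util.regex.Pattern;
        import org.apache.pig.EvalFunc;
        import org.apache.pig.data.BagFactory;
        import org.apache.pig.data.DataBag;
        import org.apache.pig.data.Tuple;
        import org.apache.pig.data.TupleFactory;

        // Returns a bag containing every number captured by foo="N" in a line
        public class ExtractAllFoo extends EvalFunc<DataBag> {
            private static final Pattern FOO = Pattern.compile("foo=\"([0-9]+)\"");

            @Override
            public DataBag exec(Tuple input) throws IOException {
                if (input == null || input.size() == 0 || input.get(0) == null) {
                    return null;
                }
                DataBag bag = BagFactory.getInstance().newDefaultBag();
                Matcher m = FOO.matcher((String) input.get(0));
                while (m.find()) { // find() walks through every match in the line
                    bag.add(TupleFactory.getInstance().newTuple(m.group(1)));
                }
                return bag;
            }
        }

    Registered and called as something like TUPLES = foreach LINES generate ExtractAllFoo(line); each row then carries a bag such as {(1),(2)} rather than a flat tuple, which FLATTEN or a follow-up step can reshape as needed.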

    Read the article

  • Google App Engine or Amazon EC2 for RESTful services and direct access to the datastore

    - by imran
    I'm thinking of building a RESTful app, developed in Java, on either App Engine or EC2. I'm interested in opinions on, and experience with, the two options for this. The primary purpose is to create web services to write and retrieve data from a mobile device; basically, creating an API for the service I want to build. It seems to me it would be quicker and cheaper in the beginning to go with Google App Engine, using either Restlet or Grails. But I also think that I could run into problems in the future when I want to do something more advanced and might be restricted by App Engine's environment. I also want to be able to do data analysis on the data in the datastore. It seems that with App Engine this would be hard, as I don't have direct access to the datastore (on Amazon I could still have access to the underlying DB if I go with MySQL).

    Read the article

  • PIG doesn't read my custom InputFormat

    - by Simon Guo
    I have a custom MyInputFormat that is supposed to deal with the record-boundary problem for multi-line inputs. But when I put the MyInputFormat into my UDF load function, as follows:

        public class EccUDFLogLoader extends LoadFunc {
            @Override
            public InputFormat getInputFormat() {
                System.out.println("I am in getInputFormat function");
                return new MyInputFormat();
            }
        }

        public class MyInputFormat extends TextInputFormat {
            public RecordReader createRecordReader(InputSplit inputSplit, JobConf jobConf) throws IOException {
                System.out.println("I am in createRecordReader");
                // MyRecordReader is supposed to handle the record boundary
                return new MyRecordReader((FileSplit) inputSplit, jobConf);
            }
        }

    for each mapper, it prints out "I am in getInputFormat function" but not "I am in createRecordReader". I am wondering if anyone can provide a hint on how to hook up my custom MyInputFormat to PIG's UDF loader? Much thanks. I am using PIG on Amazon EMR.
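
    A likely culprit, offered as a guess: on the new mapreduce API, createRecordReader takes a TaskAttemptContext, so the JobConf version above is a different method that nothing ever calls, and Hadoop silently falls back to the default LineRecordReader. With @Override the compiler would catch this; a sketch (MyRecordReader's constructor would need adapting to these types):

        import org.apache.hadoop.io.LongWritable;
        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapreduce.InputSplit;
        import org.apache.hadoop.mapreduce.RecordReader;
        import org.apache.hadoop.mapreduce.TaskAttemptContext;
        import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

        public class MyInputFormat extends TextInputFormat {
            @Override // the compiler now verifies this actually overrides
            public RecordReader<LongWritable, Text> createRecordReader(
                    InputSplit split, TaskAttemptContext context) {
                return new MyRecordReader(split, context);
            }
        }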

    Read the article

  • AWS SES for bulk mail : Require email verification?

    - by weotch
    We're thinking of moving to Amazon's SES for sending bulk mail. It appears that we make a unique API call for each email we want to send, so if there are 20K emails to send, we make 20K API calls. My question is: do we need to verify these email addresses before we send to them? We have an existing database of users, and I'd rather the transition to SES be transparent to them. I noticed that SES has an API method for verifying email addresses. If we aren't required to verify recipients, why would someone use this method?

    Read the article

  • What is the cheapest non-colocation way to serve about 10 static files at a rate of 100 megabits per second?

    - by Mark Maunder
    I've looked at Amazon S3, and it costs roughly $4,746 per month for 100 megabits/s, which translates into 31,640 gigabytes of data transferred at a rate of $0.15 per gig (100 megabits/s is 12.5 MB/s, which over a 30-day month comes to roughly 31,640 GB). I haven't found a cheaper "cloud" option. I'm curious if there's any other cloud hosting option out there cheaper than S3. Uptime is not an issue, because I can build failover for most things into the browser; e.g. I can use JavaScript to say "if the image didn't load, then go to this other URL instead." FYI, I'm currently using a colocation facility, which is about 30% cheaper than S3, and I'm familiar with colo prices, so this question is really about "cloud" services, by which I mean services where I don't have to worry about the infrastructure.

    Read the article

  • Database storage for high sample rate data in web app

    - by Jim
    I've got multiple sensors feeding data to my web app. Each channel is 5 samples per second, and the data gets uploaded bundled together in 1-minute JSON messages (each containing 300 samples). The data will be graphed using Flot at multiple zoom levels, from 1 day down to 1 minute. I'm using Amazon SimpleDB, and I'm currently storing the data in the 1-minute chunks that I receive it in. This works well for high zoom levels, but for full days there will simply be too many rows to retrieve. The idea I currently have is that every hour I can crawl through the data, collect together 300 samples for the last hour, and store them in an hour domain (table, if you like). Does this sound like a reasonable solution? How have others implemented the same sort of system?

    Read the article

  • Serving GZipped files from s3 using the Asset Pipeline

    - by kmurph79
    I have a Rails 3.2.3 app on Heroku, and I'm using the asset_sync gem to serve my assets from S3, via these instructions. It works great, except that S3 is not serving the gzipped CSS/JS files (just the uncompressed versions). I've enabled gzip compression, to no avail:

        config.gzip_compression = true

    According to "Using GZIP with html pages served from Amazon S3", I need to add metadata to the S3 object when uploading. How would I do this in concert with the asset pipeline? Thank you for any help.
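
    Outside the gem, the metadata S3 needs looks like the following: the gzipped body is uploaded under the original name with a Content-Encoding header, which is what makes browsers decompress it (the bucket and paths are placeholders):

        # Compress, then upload under the .css name with the headers S3 must serve
        gzip -9 -c application.css > application.css.gz
        aws s3 cp application.css.gz s3://my-assets-bucket/assets/application.css \
            --content-encoding gzip \
            --content-type text/css \
            --cache-control "public, max-age=31536000"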

    Read the article

  • Why wouldn't an S3 ACL "stick"?

    - by Chris Phillips
    We would like to set an ACL to allow access to one of our buckets with a partner account. We've tested the process on a test account and everything works fine. On our production account/buckets, however, we can set the ACL and see the update but as soon as we attempt to access the bucket from the other account we get a forbidden response. Afterwards, when we look at the ACL list for the bucket, the permission is gone. We've tried using both Amazon's new S3 tool in the AWS Management Console and CloudBerry Explorer and both tools exhibit exactly the same behavior. Using the same process to update an ACL from our test account works as expected ( the ACL update "sticks" ). What would cause the ACL to not "stick"? Does anyone have any ideas on how to fix/workaround the problem?

    Read the article

  • CSS not displayed depending on page

    - by Kanjiroushi
    I have a friend who has a really strange issue with my website. When he clicks on http://www.copeo.fr/ the page displays fine, but when he clicks on a link like www.copeo.fr/user/, the CSS is not applied, even after a refresh. The raw HTML does display. I asked him to load the CSS that is hosted on Amazon S3 (hcopeoressources.s3.amazonaws.com/style/futurvert/style.css) directly, and it displays fine. The page validates in the W3C validator, and so does the CSS. I am lost as to what could be the origin of the issue. Could it be his company's cache? The configuration of IE7 on his machine? If this has happened to someone else who could explain the issue to me, I am all ears. Thanks

    Read the article
