Search Results

Search found 3025 results on 121 pages for 'amazon ec2'.

Page 56/121 | < Previous Page | 52 53 54 55 56 57 58 59 60 61 62 63  | Next Page >

  • Kindle 2 and PDFs in landscape

    - by doronkatz
    Hi guys, I am looking at getting a Kindle 2. I have read a lot about its PDF support (or lack thereof) and wanted to ask someone who has a Kindle a question. If you read a PDF in landscape mode, does it shrink the text to fit it all on one screen, or does it increase the font size and split it into two or more pages? I have another reader, the iRiver Story, and it does the latter: it splits the page across multiple screens, making it readable. I know you can't zoom or anything like that in portrait view (I assume). I know you will say stick with the iRiver, but the build of the Kindle is a lot better (metallic back) and it's useful to have a hybrid Amazon book/PDF reader in one.

    Read the article

  • SPF problems with Google Apps

    - by mahle
    I currently have an SPF record with a hostname of @ that is: v=spf1 mx ip4:x.x.x.243/32 include:_spf.google.com include:amazonses.com ~all. I also have another record of "spf2.0/pra mx ip4:x.x.x.243/32 include:_spf.google.com include:amazonses.com ~all". We have had a lot of email bounced back as spam, and now when I go to http://www.kitterman.com/spf/validate.html? and check "Does my domain already have an SPF record? What is it? Is it valid?", it says no SPF record exists. However, when I send an email using our Amazon SES script and check the headers, it says it passes the SPF test. Is there something I am missing? Do I need to place that text in quotes ""? Any help would be greatly appreciated.
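
    For illustration only - not from the original post - a single SPF policy is normally published as a TXT record at the zone apex, and some DNS control panels also need the value wrapped in quotes; a record published only under the deprecated SPF record type is ignored by many validators. A sketch of the TXT form, reusing the hosts above (example.com is a placeholder for the real domain):

        example.com.  IN  TXT  "v=spf1 mx ip4:x.x.x.243/32 include:_spf.google.com include:amazonses.com ~all"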

    Read the article

  • Email solution for new domain [on hold]

    - by user196286
    I registered my domain at NameCheap and have it hosted now at AWS Route 53. However, I'm at a loss for how to set up sending transactional email. I hear Amazon SES is a good solution, but that requires me to verify my email address, and I don't have email set up (no email addresses at my domain, nor an email client to receive the verification message). As an added wrinkle, I have my sitename.com bucket redirecting to www.sitename.com, and I'm hosting my site on Route 53 using www.sitename.com. Does this screw things up if I need to set up MX records, since perhaps the 'www' throws things off (would it point to mail.www.sitename.com)?

    Read the article

  • How to set up JBoss with S3_Ping on AWS?

    - by Jonik
    I'm looking into running clustered JBoss on Amazon Web Services (AWS). I'd like to try out S3_PING, i.e. making JBoss use an S3 bucket for dynamic node discovery etc., since no multicast is available. I found a piece of example config XML related to S3_PING, but I'm not sure where in the JBoss installation you're supposed to configure this. So, what JBoss config files would I need to tweak to get S3_PING working? Can anyone point me to a more complete example? This is JBoss 5.1.0 GA. (This is probably more a JGroups/JBoss question than anything else. I've already got the S3 bucket for this set up, so no problem there.)
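
    As a hedged sketch (not a verified configuration): in JBoss 5.1.0 GA the JGroups stacks are typically defined in server/<profile>/deploy/cluster/jgroups-channelfactory.sar/META-INF/jgroups-channelfactory-stacks.xml (path from memory - check your install), and S3_PING would replace the multicast discovery element (MPING/PING) in the stack you use; the bucket name and credentials below are placeholders:

        <S3_PING location="my-jgroups-bucket"
                 access_key="YOUR_AWS_ACCESS_KEY"
                 secret_access_key="YOUR_AWS_SECRET_KEY"
                 timeout="2000"
                 num_initial_members="2"/>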

    Read the article

  • Two threads in a Rails initializer file don't seem to run properly

    - by Luccas
    Initially I was using one thread to listen to a queue from Amazon, and it works perfectly. In aws.rb: Thread.new do queue1 = AWS::SQS::Queue.new(SQSADDR['my_queue']) queue1.poll do |msg| ... but now I have added another thread to listen to another queue: ... Thread.new do queue2 = AWS::SQS::Queue.new(SQSADDR['my_another_queue']) queue2.poll do |msg| ... and now it seems not to work. Only the last one receives responses... Do I have to join the threads? I can't understand what is going on.
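
    A minimal sketch of the usual shape, assuming the aws-sdk v1 gem and the SQSADDR hash from the post (the handler body is a placeholder): surface exceptions from the pollers and keep a handle on each thread; note that poll blocks forever, so joining the threads in an initializer would hang startup rather than fix anything:

        # config/initializers/aws.rb - illustrative sketch only
        Thread.abort_on_exception = true  # let a failing poller crash loudly instead of dying silently

        SQS_THREADS = ['my_queue', 'my_another_queue'].map do |name|
          Thread.new do
            queue = AWS::SQS::Queue.new(SQSADDR[name])
            queue.poll do |msg|
              # handle msg for this queue here
            end
          end
        end
        # do NOT join these threads here: poll never returns, so joining would block Rails boot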

    Read the article

  • What is the cheapest non-colocation way to serve about 10 static files at a rate of 100 megabits per second?

    - by Mark Maunder
    I've looked at Amazon S3 and it costs roughly $4746 per month for 100 megabits/s (which translates into 31,640 gigabytes of data transferred, at a rate of $0.15 per gig). I haven't found a cheaper "cloud" option. I'm curious if there's any other cloud hosting option out there cheaper than S3. Uptime is not an issue because I can build failover for most things into the browser, e.g. I can use JavaScript to say "if the image didn't load then go to this other URL instead." FYI I'm currently using a colocation facility which is about 30% cheaper than S3, and I'm familiar with colo prices - so this question is really about "cloud" services, and by that I mean services where I don't have to worry about the infrastructure.
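
    For reference, the quoted figure works out as follows (assuming a 30-day month and counting the transfer in binary gigabytes, which is what matches the numbers in the question):

        100 Mbit/s ÷ 8           = 12.5 MB/s
        12.5 MB/s × 2,592,000 s  ≈ 32,400,000 MB ≈ 31,640 GiB per month
        31,640 GiB × $0.15/GiB   ≈ $4,746 per month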

    Read the article

  • Nginx server_name is set to mydomain.com, so why is www.mydomain.com getting served too?

    - by Lorenz Forvang
    I have my Nginx conf set up as follows: server { listen 443 ssl; server_name mydomain.com; ... } When I load https://mydomain.com, the site loads fine. But when I load https://www.mydomain.com, the site loads as well. Why is this happening? I set up the DNS records using Amazon Route 53 as: A mydomain.com xxx.xxx.xxx.xxx (IP), CNAME www.mydomain.com mydomain.com. So is a request to www.mydomain.com arriving at Nginx as a request to mydomain.com? If so, how do I differentiate requests to www.mydomain.com and mydomain.com at my server?
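
    That behaviour is expected: with only one server block listening on port 443, nginx treats it as the default for any Host name arriving on that port, so www requests land in it too. A minimal sketch of splitting the two names (assuming the certificate also covers www.mydomain.com; the redirect is just one option):

        server {
            listen 443 ssl;
            server_name mydomain.com;
            # ... existing site configuration ...
        }

        server {
            listen 443 ssl;
            server_name www.mydomain.com;
            return 301 https://mydomain.com$request_uri;  # or serve different content here
        }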

    Read the article

  • Using AWS SES with Sendmail

    - by Abs
    I am trying to send mail via AWS SES using Sendmail. I have Sendmail version 8.14.4 installed and I followed the first section of this useful tutorial by Amazon. However, I get this: root@:/etc/mail# echo "Subject: test" | sendmail -v [email protected] [email protected]... Connecting to [127.0.0.1] via relay... [email protected]... Deferred: Connection timed out with [127.0.0.1] Can anyone help me get this working? The logs have the following: Dec 14 10:35:21 ip-10-xx-xx-181 sm-msp-queue[17910]: qBE8K1Lu016411: to=root, delay=00:21:24, xdelay=00:06:19, mailer=relay, pri=121806, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1]
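
    A deferred connection to [127.0.0.1] from sm-msp-queue usually means the submission program queued the message but no local sendmail MTA daemon was listening to accept it. A hedged first check (commands assume a sysvinit-style Linux box):

        service sendmail status     # is the MTA daemon actually running?
        netstat -lnt | grep ':25 '  # is anything listening on 127.0.0.1:25?
        sendmail -Ac -q -v          # once the daemon is up, flush the client (MSP) queue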

    Read the article

  • Where would an S3 upload speed cap originate?

    - by CoreyH
    I do a ton of uploading to S3 and am experiencing capped speeds that I can't quite figure out how to address. The setup: Windows Server 2008 R2 x64, external HD, using a Java-based upload tool called Jsh3ll and custom VBS scripts to kick the jobs off. Running one process at a time, I am always limited to about 4 Mbps. I have FiOS at 35/35 Mbps speeds, so it isn't an outright limit. And I can run parallel instances and go all the way up to 35 Mbps, so I know the problem isn't gateway/NIC/machine/Amazon related. Running parallel instances works to a degree as a solution, but it increases the complexity of my workflow greatly. Solving this would make my life dramatically easier. When I was first doing this I was playing around with a bunch of Windows TCP parameters and was able to briefly get unconstrained bandwidth, but it wasn't repeatable. Thoughts?

    Read the article

  • How to Access an AWS Instance with RDC when behind a Private Subnet of a VPC

    - by dalej
    We are implementing a typical Amazon VPC with public and private subnets, with all servers running Windows. The MS SQL instances will be on the private subnet, with all IIS/web servers on the public subnet. We have followed the detailed instructions at Scenario 2: VPC with Public and Private Subnets and everything works properly - until the point where you want to set up a Remote Desktop Connection into the SQL server(s) on the private subnet. At this point the instructions assume you are accessing a server on the public subnet, and it is not clear what is required to RDC to a server on a private subnet. It would make sense that some sort of port redirection is necessary - perhaps accessing the EIP of the NAT instance to reach a particular SQL server? Or perhaps using an Elastic Load Balancer (even though this is really for HTTP protocols)? But it is not obvious what additional setup is required for such a Remote Desktop Connection.
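
    One common approach - a sketch, not part of the AWS guide - is either to RDP to a bastion host on the public subnet and hop from there, or to forward a port on the NAT instance to the private SQL box with iptables. The private address below is a placeholder, and the SQL server's security group must also allow 3389 from the NAT instance:

        # on the NAT instance: forward inbound RDP (3389) to the private SQL server
        iptables -t nat -A PREROUTING  -i eth0 -p tcp --dport 3389 -j DNAT --to-destination 10.0.1.50:3389
        iptables -t nat -A POSTROUTING -d 10.0.1.50 -p tcp --dport 3389 -j MASQUERADE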

    Read the article

  • Where and how does Kindle Cloud Reader store downloaded books, on a Windows 7 system?

    - by einpoklum
    I use Firefox and sometimes Chrome, on Windows 7. Amazon's in-browser Kindle Cloud Reader lets you "download" books for local/offline viewing. Where are these stored, given my OS+browser combination? I've searched the Users subdirectory for my user and could not find a relevant (separate) file in there, specifically not in the Firefox and Chrome profile directories. To clarify, the files are obviously not downloaded as-is and are stored in some potentially obfuscated format, possibly in the browser's local store and possibly elsewhere. The question is, where and how exactly? (This came up in an earlier question, but wasn't answered there since it was not that question's main focus.)

    Read the article

  • create a CNAME record for AWS LoadBalancer DNS name

    - by t q
    I am trying to set up a load balancer on AWS. The DNS name it gave me looks like myLoadBalancer-**********.us-east-1.elb.amazonaws.com; however, when I try to put that in my domain registrar's A record, I get an error: "IP address is not valid. Must be of type x.x.x.x where x is 0-255." Amazon's solution is that you should create a CNAME record for the load balancer DNS name, or use Amazon Route 53 to create a hosted zone. Route 53 gives me its own nameservers, but if I use those then my email, which is handled through the registrar, stops working. Question: is there a way to use Route 53 and retain my email? Or should I create a CNAME record for the load balancer DNS name - and if so, how do I do that? I'm not sure what it means.
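
    A hedged sketch of the two options (hostnames are placeholders). Staying with the current DNS provider, only the www name can point at the load balancer, via a CNAME, and the existing MX records stay untouched; moving the zone to Route 53 also keeps mail working as long as the current MX records are recreated in the hosted zone before the nameservers are switched, and Route 53's alias records can then point even the bare domain at the ELB:

        ; option 1: at the current DNS provider - email records unchanged
        www.example.com.   IN CNAME  myLoadBalancer-**********.us-east-1.elb.amazonaws.com.
        example.com.       IN MX 10  mail.current-mail-host.example.  ; keep whatever MX you already have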

    Read the article

  • Problems with the backup

    - by marcodv
    I wrote a script that runs around 4 o'clock in the morning and backs up all the MySQL databases and the config files for 250 Linux VMs. The problem is that it takes ages to complete, and more than 50% of these VMs need more than 8 hours to finish. More or less all the VMs have the same configuration: the same amount of RAM, the same amount of disk space, the same number of CPUs, and Debian 6.0.5. I am saving these backups on Amazon S3, because it is the cheapest solution I've found. Now my question is: does anyone have solutions or suggestions for this? On one blog I read that a combination of ionice and nice could be a good workaround. Any thoughts?
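
    As a hedged sketch of the ionice/nice idea (paths are placeholders, and --single-transaction assumes InnoDB tables), the dump can be demoted to the lowest CPU and best-effort I/O priority so it stops competing with the VM's normal work; staggering the start times across the 250 VMs, rather than launching everything at 4 a.m., is the other common lever:

        nice -n 19 ionice -c2 -n7 \
            mysqldump --all-databases --single-transaction \
            | gzip > /var/backups/all-databases-$(date +%F).sql.gz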

    Read the article

  • Backing Up User Data when data is not in use. Should I be concerned?

    - by jberryman
    This may be a dumb question. I would like to use duplicity to make backups to Amazon S3 of directories, each of which contains a different user's data. Each directory could be written to at any time, so I have two questions: Should I be concerned that a scheduled backup of a directory might occur in the middle of data being written to files in that directory, resulting in a corrupted backup? And if that is a valid concern, how would I go about temporarily delaying the backup while IO is happening, to try to minimize that effect? Thanks for the advice.
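
    A backup taken mid-write can indeed capture files in an inconsistent state. One hedged way to shrink that window (bucket name and paths are placeholders, and the S3 URL scheme varies between duplicity versions): take a quick rsync copy of the directory first and point duplicity at that quiescent staging copy, or use a filesystem/LVM snapshot if one is available:

        rsync -a --delete /home/user1/ /backup-staging/user1/
        duplicity /backup-staging/user1 s3+http://my-backup-bucket/user1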

    Read the article

  • How to maximize parallel download from S3

    - by StCee
    I have a lot of images to load from Amazon S3 on a single page, and sometimes it takes quite some time to load them all. I heard that splitting the images across different sub-domains helps parallel downloads, but what does the actual implementation look like? While it is easy to split across sub-domains like static, image, etc., should I make something like 10 sub-domains (image1, image2, ...) to load, say, 100 images? Or is there some cleverer way to do it? (By the way, I am considering using memcache to cache the S3 images; I am not sure if that is possible. I would be grateful for any further comments. Thanks a lot!)

    Read the article

  • Redirect all subdomains to subfolders

    - by alfonso
    I'd like to add a rule so that all subdomains get redirected to a subfolder. For example: app1.example.com -> example.com/app1, app2.example.com -> example.com/app2, something.example.com -> example.com/something. All subdomains will only be one level deep. Questions: Which DNS providers allow me to do this? Are these alternatives feasible? Redirect them all to a special webapp with a static IP that redirects to the proper subfolder - how can I know which subdomain they came from? Or programmatically create each rule when I need it - which DNS providers have API access to add rules? I think Amazon Route 53 might be the answer here.
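
    One feasible shape for the first alternative (a sketch; example.com is a placeholder): publish a wildcard DNS record for *.example.com - Route 53 supports wildcards - pointing at a single web server, and let that server read the subdomain out of the Host header, for example with an nginx named capture (requires an nginx build with PCRE named-capture support):

        server {
            listen 80;
            server_name ~^(?<sub>.+)\.example\.com$;
            return 301 http://example.com/$sub$request_uri;
        }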

    Read the article

  • SQL – What is the latest Version of NuoDB? – A Quick Contest to Get Amazon Gift Cards

    - by Pinal Dave
    We had a great contest last week - What ACID stands in the Database? – Contest to Win 24 Amazon Gift Cards and Joes 2 Pros 2012 Kit. It received quite a few responses. Just like any other contest, not everyone was a winner. The kind folks at NuoDB decided to give another chance to everyone who did not win the last contest. This means that if you missed taking part in the earlier contest, or if you took part and did not win, you still have one more chance to win an Amazon Gift Card. Here is the quick contest: you just have to go and download NuoDB. The first 10 people who download NuoDB will get USD 10 cards. Everyone else will be entered into a lucky draw for USD 50 Amazon Gift Cards. Winners will be announced in the next 24 hours. Bonus Round: If you have entered the contest above, you can also enter to win the latest Beginning SSRS Joes 2 Pros book. You just have to leave a comment here describing your experience with NuoDB and the latest version of the product. Here are a few of the blog posts I wrote earlier on the subject: Part 1 – Install NuoDB in 90 Seconds Part 2 – Manage NuoDB Installation Part 3 – Explore NuoDB Database Part 4 – Migrate from SQL Server to NuoDB Part 5 - NuoDB and Third Party Explorer – SQuirreL SQL Client, SQL Workbench/J and DbVisualizer Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • AWS lighttpd: Sending a copy of requests to test.

    - by Martin
    I have a load-balanced service on AWS, so the ELB evenly distributes the load across my servers. Each server runs lighttpd, which does logging and forwards the requests to my service (on the same machine). I have written a new version of the service. It is installed and running on an EC2 machine, test1 (basically a mirror of our current server, but with the new service running instead of the original), and I have done some preliminary tests that look good. What I would like to do is mirror a fraction of incoming traffic to the new version of the service, so I can compare the original version and the new version on real traffic. I was thinking I could modify one box behind the ELB to duplicate its traffic to test1, by changing the lighttpd configuration so that each request is mirrored/duplicated (i.e. the original service keeps responding as before, while a mirrored request is sent to test1 and its reply is simply dropped). Unfortunately I have not been able to work this out. Any ideas on how I could mirror the requests from one box to both itself and test1? Or any other ideas for testing?

    Read the article

  • How can I do an SELinux filesystem relabel without rebooting first?

    - by Skaperen
    I can touch the file /.autorelabel and reboot, and during initialization on the way back up it will do the SELinux relabel for me. But I want to do this in a different situation, where the system has just been copied to a hard drive image. I can chroot to the originating file tree, or chroot to the just-populated device image, and run something there. I just can't find anything that says what should be run. This image is being made into an AMI on AWS EC2 and contains CentOS 6.3. The time the relabel takes is too long (6 minutes or more), so I want to move it to the image build, where the extra time is not an issue (because it happens once instead of every time an AMI is launched). I can make this relabel the very last thing before the filesystem is unmounted for the last time, before it becomes an AMI and is launched. I just need to know what to call to do it. I have searched man pages with no luck. I have searched the system init scripts, but where /.autorelabel is detected it is unclear what is happening. Documents like http://www.centos.org/docs/5/html/5.2/Deployment_Guide/sec-sel-fsrelabel.html only tell how to do things that still really do the work after a reboot. I need the work done BEFORE the "reboot" (unmount, build AMI, and launch ready to go). The big point is: yes, there will be a reboot, but I want the relabel work done before that so it won't be repeated every time an AMI is launched (because it takes so long).
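
    A hedged sketch of relabelling the image offline from the build host, without booting it (the mount point is a placeholder, and the file_contexts path assumes the targeted policy inside the CentOS 6.3 image):

        # label the mounted image using the image's own policy, with the mount point as the alternate root
        setfiles -r /mnt/ami-root \
            /mnt/ami-root/etc/selinux/targeted/contexts/files/file_contexts \
            /mnt/ami-root
        # remove the trigger file, if present, so the relabel is not repeated at first boot
        rm -f /mnt/ami-root/.autorelabel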

    Read the article

  • Migrating to AWS Cloud with auto-scaling - where to put Redis and ElasticSearch?

    - by RobMasters
    I've been trying to research this topic but haven't found anywhere that recommends where to install services such as Redis and ElasticSearch when migrating to a cloud framework. I'm currently running a Symfony2 application on 2 static servers - one is running MySQL and the other is the public-facing web server, which also has Redis and ElasticSearch running on it. Both of these servers are virtualised, but they're static in the sense that they can't be replicated at present (various aspects are still dependent on the local filesystem). The goal is to migrate to AWS and use auto-scaling to be able to spin up and kill web servers as required, but I'm not clear on what I should put on each EC2 instance. Should they be single-responsibility only? i.e. set up individual instances for the web server(s), Redis, and ElasticSearch, most likely an RDS instance for MySQL, and only set up auto-scaling on the web server(s)? I don't foresee having to scale the ElasticSearch server any time soon as it's only driving the search functionality, but it's possible that Redis may need to be replicated at some point - should this be done manually? I'm not sure how it could be done automatically, as each instance needs to be configured to know about its master/slave(s) as far as I know. I'd appreciate advice on this. One more quick question while I'm here - how would I be able to deploy code changes when there are X web servers currently active? I'm using a Capifony deployment script (the Symfony2 version of Capistrano), which I think can handle multiple servers easily enough by specifying an array of :domain addresses... but how should this be handled when the number of web servers can vary?

    Read the article

  • Performance data collection for short-running, ephemeral servers

    - by ErikA
    We're building a medical image processing software stack, currently hosted on various AWS resources. As part of this application, we have a handful of long-running servers (database, load balancers, web application, etc.). Collecting performance data on those servers is quite simple - my go-to recipe of Nagios (for monitoring/notifications) and Munin (for collection of performance data and displaying trends) will work just fine. However - as part of this application, we are constantly starting up and terminating compute instances on EC2. In typical usage, these compute instances start up, configure themselves, receive a job from a message queue, and then get to work processing that job, which takes anywhere from 15 minutes to over 8 hours. After job completion, these instances get terminated, never to be heard from again. What is a decent strategy for collecting performance data on these short-lived instances? I don't necessarily need monitoring on them - if they fail for whatever reason, our application will detect this and handle re-starting the job on another instance or raising the flag so an administrator can take a look at things. However, it still would be useful to collect information like CPU (user, idle, iowait, etc.), memory usage, network traffic, disk read/write data, etc. In our internal database, we track the instance ID of the machine that runs each job, and it would be quite helpful to be able to look up performance data for a specific instance ID for troubleshooting and profiling. Munin doesn't seem like a great candidate, as it requires maintaining a list of munin nodes in a text file - far from ideal for an environment with a high amount of churn, and for the short amount of time each node will be running, I'd rather keep the full-resolution data indefinitely than have RRD water down the data over time. In the end, my guess is that this will require a monitoring engine that uses a database (MySQL, SQLite, etc.) for configuration and data storage, and exposes an API for adding/removing hosts and services. Are there other things I should be thinking about when evaluating options? Perhaps I'm over-thinking this, though, and just ought to run sar at 1-minute intervals on these short-lived instances and collect the sar db files prior to termination.
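
    The sar idea at the end can be sketched roughly like this (the bucket name and the use of the aws CLI are assumptions; sa1 lives at /usr/lib64/sa/sa1 on 64-bit RHEL-family systems and /usr/lib/sa/sa1 elsewhere):

        # /etc/cron.d/sysstat - sample system activity once a minute
        * * * * * root /usr/lib64/sa/sa1 1 1

        # just before terminating, ship the binary sar files to S3, keyed by instance ID
        INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
        aws s3 cp /var/log/sa/ "s3://my-perf-archive/$INSTANCE_ID/" --recursive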

    Read the article

  • Issues Deploying Functional WAR to Elastic Beanstalk with Tomcat7

    - by BFar
    I am currently deploying OpenTripPlanner (http://github.com/OpenPlans/OpenTripPlanner.git) to Elastic Beanstalk. I'm able to successfully build and deploy OpenTripPlanner with my own customized settings on an EC2 instance. I have set it up so that the appropriate WAR file can be placed in the Tomcat webapps folder, and when Tomcat is started up it will auto-deploy and even download OpenTripPlanner's graph.obj from an S3 bucket. All of that works just fine - except when I try to deploy to Elastic Beanstalk. When I upload to Elastic Beanstalk, the log shows that my WAR file is successfully unpacked and the graph.obj is successfully downloaded from my S3 bucket. The only difference is that then nothing happens and I can't load the site in my browser. The health is RED, and I can't figure out what is going on. I've tried looking into ports and DNS issues, but I can't determine what's wrong. Anyone have any ideas? Why would a WAR that works on Tomcat 7 outside of Beanstalk fail to be accessible?

    Read the article

  • Merely installing PHP5 causes my AWS Ubuntu server to die minutes later from a massive CPU spike

    - by Mark Amery
    I have an AWS server with Ubuntu 11.04 as the OS that is running an Apache2 webserver (incidentally Python-based and using Django). We recently needed to add support for php5 to let us use a third party PHP library (incidentally for serving minified versions of js and css files). However, for no reason any of us can discern, if we simply run sudo apt-get install php5 on the server, then the install appears to finish successfully but, without us taking any further action (including not yet running sudo apt-get install libapache2-mod-php5, which I think would be the next step for us if everything worked), or actually running any PHP scripts on the server, a few minutes later the server becomes impossible to connect to, and looking at the 'Monitoring' tab for the server in the EC2 Management Console reveals that a while after the installation, CPU usage spikes to 100% and stays there permanently (until we reboot the server from the AWS Console). After rebooting, the server also reliably dies within a few (between 0 and 10) minutes. We restored the server to a pre-PHP state from an AMI Image, observed that it was stable, and then tried installing PHP5 again and observed the server die in exactly the same way, so we're pretty much certain that installing PHP5 is what causes the symptoms. What on earth could be causing this behaviour, and how can we get PHP installed on the server without it dying?

    Read the article

  • How do I install and use the cli53 tools on Windows?

    - by pavlos
    I'm trying to find the simplest way to import a large number of BIND zone files into Route 53. I've had a quick look at the AWS CLI and AWS Tools for Windows PowerShell, but they don't seem to include a zone file import option like the AWS Route 53 GUI does. The cli53 utility, on the other hand, does, but it is written in Python and appears to have a series of prerequisites to get going, which I'm having trouble working out for Windows. I can find plenty of examples of setting it up under Linux, but only one reference to a PowerShell example here, and it doesn't explain how to install cli53 in the first place. The other option I'm exploring is to use the BIND to Amazon Route 53 Conversion Tool perl script to first convert the zone files to the Route 53 CreateHostedZoneRequest XML format, and then use the AWS New-R53HostedZone PowerShell cmdlet to import the zones. After the zones have been imported, I'll be looking at running a script to validate that what has been created in Route 53 matches the existing nameserver before updating each domain's nameserver records - I was planning on whipping something up using the new PS 4.0 Resolve-DnsName cmdlet, but let me know if you have any better suggestions. Any assistance would be greatly appreciated - thanks. (By the way, I had more reference links in my post, but ServerFault won't allow me to post more than 2 links as a new member; for the same reason I also can't comment on Vasili's example in the other linked thread.)
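
    For what it's worth, a hedged sketch of the cli53 route on Windows, assuming Python and pip are already installed and on the PATH; the import syntax below is from memory, so confirm it against cli53's built-in help before relying on it:

        pip install cli53
        rem credentials are read from these environment variables (via boto)
        set AWS_ACCESS_KEY_ID=YOUR_KEY
        set AWS_SECRET_ACCESS_KEY=YOUR_SECRET
        rem exact flags may differ by version - check: cli53 import --help
        cli53 import example.com --file example.com.zone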

    Read the article
