Search Results

Search found 2978 results on 120 pages for 'amazon aws'.

Page 28/120 | < Previous Page | 24 25 26 27 28 29 30 31 32 33 34 35  | Next Page >

  • Transparent Technology from Amazon

    - by David Dorf
    Amazon has been making some interesting moves again, this time in the augmented humanity area. Augmented humanity is about helping humans overcome their shortcomings using technology. Putting a powerful smartphone in your pocket helps you in many ways, like navigating streets, communicating with far-off friends, and accessing information. But the interface for smartphones is somewhat limiting and unnatural, so companies have been looking for ways to make the technology more transparent and therefore easier to use.

    When Apple helped us drop the stylus, we took a giant leap forward in simplicity. Using touchscreens with intuitive gestures was part of the iPhone's original appeal. People don't want to know that technology is there; they just want the benefits. So what's the next leap beyond the touchscreen to make smartphones even easier to use?

    Two natural ways we interact with the world around us are sight and voice. Google and Apple have been using both in their mobile platforms for limited use cases. Nobody actually wants to type a text message, so why not just speak it? And if you want more information about a book, why not just snap a picture of the cover? That's much more accurate than trying to key in the title and/or author.

    So what's Amazon been doing? First, Amazon released a new iPhone app called Flow that allows iPhone users to see information about products in context. Yes, it's an augmented reality app that uses the phone's camera to view products and overlays data about the products on the screen. For the most part it requires the barcode to be visible to correctly identify the product, but I believe it can also recognize certain logos as well. Download the app and try it out, but don't expect perfection. It's good enough to demonstrate the concept, but far from accurate enough. (MobileBeat did a pretty good review.) Extrapolate to the future and we might just have a heads-up display in our eyeglasses.

    The second interesting area is voice response, for which Siri is getting lots of attention. Amazon may have purchased a voice recognition company called Yap, although the deal is not confirmed. But it would make perfect sense, especially with the Kindle Fire in Amazon's lineup.

    I believe over the next 3-5 years the way in which we interact with smartphones will mature, and they will become more transparent yet more important to our daily lives. This will, of course, impact the way we shop, making information more readily accessible than it already is. Amazon seems to be positioning itself to be at the forefront of this trend, so we should be watching them carefully.

    Read the article

  • Email notification and mail server

    - by Jerr Wu
    I am building a web application with email notifications, just like Facebook, which will be hosted at http://www.linode.com/. When user A comments on a post, the poster will get an email notification from '[email protected]' with the comment message written by user A. (Not spam.) I really like Google Apps, but it has a sending limit of 2000 messages per day, which does not suit my case because I cannot have sending limits; there will be many email notifications. http://support.google.com/a/bin/answer.py?hl=en&answer=166852 I also need company email accounts for team members, for which I would prefer Google Apps. My web application will be hosted on Linode, and I am considering Amazon Simple Notification Service for the email notifications. My questions are: Can you recommend any other email service provider that suits my case? Can I bind company email accounts (e.g. [email protected]) to Google Apps and bind [email protected] to a different email service provider?
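
    For transactional notification mail specifically, Amazon SES (rather than SNS) is the usual fit, and its quota can be raised well beyond 2000/day. A hedged sketch of sending one notification with boto3; it assumes AWS credentials are configured and that '[email protected]' has been verified with SES:

      import boto3  # assumes AWS credentials are available in the environment

      ses = boto3.client("ses", region_name="us-east-1")

      def notify_poster(poster_email, commenter, comment):
          # The Source address must be verified with SES before it can send.
          ses.send_email(
              Source="[email protected]",
              Destination={"ToAddresses": [poster_email]},
              Message={
                  "Subject": {"Data": commenter + " commented on your post"},
                  "Body": {"Text": {"Data": comment}},
              },
          )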

    Read the article

  • How can I limit CloudFront downloads

    - by Alex Crouzen
    I'm looking to use Amazon's CloudFront to host some content in the near future. Currently I'm keeping it very simple: I just upload my content to S3 and then make a distribution available via CloudFront. However, because I have a limited budget, I'd like to be able to limit the number of downloads or the money spent on bandwidth. As far as I can see, I can't set any quotas or budgets the way you can in Google's App Engine, so I'm looking for another way of doing this. Has anyone had any experience doing this? One approach I'm considering is placing a web server that issues redirects in between, but that kind of defeats the simplicity of CloudFront for me.
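
    CloudFront has no built-in spending cap, but one common stopgap is a CloudWatch billing alarm that fires once estimated charges cross a threshold; reacting to it (for example by disabling the distribution) is still up to you. A hedged boto3 sketch, assuming billing alerts are enabled on the account; the threshold and SNS topic ARN are placeholders:

      import boto3

      cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

      # The EstimatedCharges metric is published in us-east-1 once
      # billing alerts are enabled for the account.
      cloudwatch.put_metric_alarm(
          AlarmName="cloudfront-budget",
          Namespace="AWS/Billing",
          MetricName="EstimatedCharges",
          Dimensions=[{"Name": "Currency", "Value": "USD"},
                      {"Name": "ServiceName", "Value": "AmazonCloudFront"}],
          Statistic="Maximum",
          Period=21600,                 # evaluate every six hours
          EvaluationPeriods=1,
          Threshold=20.0,               # alarm above $20 (placeholder)
          ComparisonOperator="GreaterThanThreshold",
          AlarmActions=["arn:aws:sns:us-east-1:123456789012:budget-alerts"],  # placeholder ARN
      )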

    Read the article

  • Running docker in VPC and accessing container from another VPC machine

    - by Bogdan Gaza
    I'm having issues while running Docker in an AWS VPC. Here is my setup: I've got two machines running in the VPC, 10.0.100.150 and 10.0.100.151, both with elastic IPs assigned and both running in the same internet-enabled subnet. Let's say I'm running a web server that serves static files in a container on the 10.0.100.150 machine. The container's IP is 172.17.0.2, and its port 8111 is forwarded to port 8111 on the machine. I'm trying to access the static files from my local machine (or another non-VPC machine; I also tried an EC2 instance not running in the VPC) and it works flawlessly. If I try to access the files from the other machine (10.0.100.151), it hangs. I'm using wget to pull the files. I tried to debug it with tcpdump and ngrep, and what I have seen is that the request reaches the container. If I ngrep on the host machine I see the requests going in but no response going back. If I ngrep on the container I see the requests going in and the response going back. I've tried multiple iptables setups (with postrouting enabled, with manually forwarded ports, etc.) but no success. Help in any way, even debugging directions, would be much appreciated. Thanks!

    Read the article

  • Amazon S3 supports CORS; the specification opens the Cloud platform's resources to Web applications

    Amazon S3 supports CORS; the specification opens the Cloud platform's resources to Web applications. Amazon Web Services (AWS) has added support for the CORS (Cross-Origin Resource Sharing) specification to its Amazon S3 Cloud platform. CORS is a standard that gives developers the ability to build Web applications that make requests to domains other than the one that served the main content. The integration of CORS into Amazon S3 now makes it easier to build Web applications that access data stored on the Cloud platform. Developers can use the CORS specification to build Web applications in JavaScript and ...
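
    As an illustration of what the feature enables, a hedged boto3 sketch that attaches a CORS rule to a bucket; the bucket name and allowed origin are placeholders:

      import boto3

      s3 = boto3.client("s3")

      # Allow one origin to GET objects from the bucket via XHR.
      s3.put_bucket_cors(
          Bucket="my-bucket",  # placeholder
          CORSConfiguration={
              "CORSRules": [{
                  "AllowedOrigins": ["https://www.example.com"],
                  "AllowedMethods": ["GET"],
                  "AllowedHeaders": ["*"],
                  "MaxAgeSeconds": 3000,
              }]
          },
      )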

    Read the article

  • How do you keep up with Nagios/Capistrano configs when using EC2?

    - by imaginative
    I use Amazon EC2 for my mobile app. Depending on the application's load at a given time, I might spawn new instances and then take them down when load is lower, to save costs. How does one keep up with Nagios configurations for such a dynamic environment? When one deals with managed hardware, configuration files are predictable. In this case Nagios, Capistrano, and a bunch of other configuration files would need to be updated. Capistrano needs to know where to deploy a new build for an app server. Nagios needs to know to remove an existing instance or add a new instance for monitoring. Nagios also needs to know whether a node was intentionally taken down or the host is down due to an error. How is this done in the wonderful world of VPS/dynamic instances?
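
    One common pattern is to treat the EC2 API as the source of truth and regenerate the monitoring config from it on a schedule, so instances that were taken down on purpose simply disappear from Nagios. A hedged boto3 sketch; the tag name, template, and output path are placeholders:

      import boto3

      ec2 = boto3.client("ec2")

      HOST_TEMPLATE = """define host {
          use         generic-host
          host_name   %(id)s
          address     %(ip)s
      }
      """

      def render_nagios_hosts():
          # One Nagios host block per running instance tagged for monitoring.
          reservations = ec2.describe_instances(
              Filters=[{"Name": "tag:monitor", "Values": ["true"]},          # placeholder tag
                       {"Name": "instance-state-name", "Values": ["running"]}]
          )["Reservations"]
          blocks = []
          for res in reservations:
              for inst in res["Instances"]:
                  blocks.append(HOST_TEMPLATE % {"id": inst["InstanceId"],
                                                 "ip": inst["PrivateIpAddress"]})
          return "\n".join(blocks)

      # Write to a file Nagios includes, then reload Nagios.
      with open("/etc/nagios/conf.d/ec2-hosts.cfg", "w") as f:  # placeholder path
          f.write(render_nagios_hosts())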

    Read the article

  • Why is the S3 website redirect location not followed by CloudFront?

    - by ychaze
    I have a website hosted on Amazon S3. It is the new version of an old website hosted on WordPress. I have set up some files with the Website Redirect Location metadata to handle the old locations and redirect them to the new website pages. For example: I had http://www.mysite.com/solution, which I want to redirect to http://mysite.s3-website-us-east-1.amazonaws.com/product.html. So I created an empty file named solution inside my bucket with the correct metadata: Website Redirect Location = /product.html. The S3 redirect metadata is equivalent to a 301 Moved Permanently, which is great for SEO. This works great when accessing the URL directly on the S3 domain. I have also set up a CloudFront distribution based on the website bucket. But when I try to access through my distribution, the redirect does not work, i.e. http://xxxx123.cloudfront.net/solution does not redirect but downloads the empty file instead. So my question is: how do I keep the redirection through the CloudFront distribution? Or any idea on how to handle the redirection without hurting SEO? Thanks
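
    For reference, a hedged boto3 sketch of writing such a redirect marker. Note that S3 only serves these redirects from its website endpoint, which is one reason a CloudFront origin pointed at the plain REST endpoint would return the empty object instead:

      import boto3

      s3 = boto3.client("s3")

      # Zero-byte object whose only job is to redirect /solution to /product.html.
      s3.put_object(
          Bucket="mysite",                        # placeholder bucket name
          Key="solution",
          WebsiteRedirectLocation="/product.html",
          Body=b"",
      )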

    Read the article

  • Backup Mongodb on EC2 through EBS snapshots - timing issue

    - by DmitrySemenov
    I'm following this guidance: http://docs.mongodb.org/ecosystem/tutorial/backup-and-restore-mongodb-on-amazon-ec2/ I have 4 EBS 1000 IOPS volumes assigned to the instance. These 4 volumes are assembled through mdadm into a software RAID10 array. I want to do backups through EBS snapshots as explained in the article above. Question: MongoDB says that in the mongo shell I need to run

      db.runCommand({fsync:1,lock:1});    // this will lock the db for writing
      // ...run snapshot creation...
      db.$cmd.sys.unlock.findOne();       // this will unlock the db for writing

    So do I need to unlock the DB for writing after I issue the command ec2-create-snapshot, or after it's finished and the actual snapshot is created? thanks, Dmitry
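
    EBS snapshots are point-in-time as of the moment the create call is accepted (they complete asynchronously), so the lock only needs to be held while snapshot creation is initiated for all four RAID members, not until the snapshots finish. A hedged sketch with pymongo and boto3; the volume IDs are placeholders, and using fsyncUnlock as a server command assumes a reasonably recent MongoDB:

      import boto3
      from pymongo import MongoClient

      ec2 = boto3.client("ec2")
      mongo = MongoClient("localhost", 27017)

      VOLUMES = ["vol-aaaa1111", "vol-bbbb2222", "vol-cccc3333", "vol-dddd4444"]  # placeholders

      # Flush and lock so no writes hit the RAID10 array mid-initiation.
      mongo.admin.command("fsync", lock=True)
      try:
          for vol in VOLUMES:
              # Returns as soon as the point-in-time snapshot is initiated.
              ec2.create_snapshot(VolumeId=vol, Description="mongodb raid10 member")
      finally:
          # Safe to unlock once all create calls have returned.
          mongo.admin.command("fsyncUnlock")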

    Read the article

  • Mobile (Client) to Amazon S3 (Server) - Architecture

    - by wasabii
    Let's start off with the problem statement: My iOS application has a login form. When the user logs in, a call is made to my API and access is granted or denied. If access was granted, I want the user to be able to upload pictures to his account and/or manage them. As storage I've picked Amazon S3, and I figured it'd be a good idea to have one bucket called "myappphotos", for instance, which contains lots of folders. The folder names are hashes of a user's email and a secret key. So every user has his own, unique folder in my Amazon S3 bucket. Since I've just recently started working with AWS, here's my question: What are the best practices for setting up a system like this? I want the user to be able to upload pictures directly to Amazon S3, but of course I cannot hard-code the access key. So I need my API to somehow talk to Amazon and request an access token of sorts, only for the particular folder that belongs to the user I'm making the request for. Can anyone help me out and/or point me to some sources where a similar problem was addressed? I don't think I'm the first one, and the Amazon documentation is so extensive that I don't really know where to start looking. Thanks a lot!
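
    One widely used arrangement: the API server, which holds the AWS credentials, returns a short-lived pre-signed URL scoped to the caller's folder, and the device PUTs the photo straight to S3. A hedged boto3 sketch; the function name and expiry are illustrative:

      import boto3

      s3 = boto3.client("s3")

      def upload_url_for(user_folder_hash, filename):
          # URL the device can HTTP PUT the photo to, valid for 10 minutes.
          return s3.generate_presigned_url(
              "put_object",
              Params={"Bucket": "myappphotos",
                      "Key": "%s/%s" % (user_folder_hash, filename)},
              ExpiresIn=600,
          )

    The device never sees long-lived keys, and the Key prefix confines each upload to that user's own folder.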

    Read the article

  • Creating Signed URLs for Amazon CloudFront

    - by Zack
    Short version: How do I make signed URLs "on-demand" to mimic Nginx's X-Accel-Redirect behavior (i.e. protecting downloads) with Amazon CloudFront/S3, using Python? I've got a Django server up and running with an Nginx front-end. I've been getting hammered with requests and recently had to run it as a Tornado WSGI application to prevent it from crashing in FastCGI mode. Now my server is getting bogged down (i.e. most of its bandwidth is being used up) by too many requests for media, so I've been looking into CDNs, and I believe Amazon CloudFront/S3 would be the proper solution for me. I've been using Nginx's X-Accel-Redirect header to protect the files from unauthorized downloading, but I don't have that ability with CloudFront/S3; however, they do offer signed URLs. I'm no Python expert by far and definitely don't know how to create a signed URL properly, so I was hoping someone would have a link explaining how to make these URLs "on-demand", or would be willing to explain it here; it would be greatly appreciated. Also, is this even the proper solution? I'm not too familiar with CDNs; is there a CDN that would be better suited for this?
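
    A hedged sketch of on-demand signing with botocore's CloudFrontSigner and the cryptography package; the key-pair ID, key file, and domain are placeholders:

      from datetime import datetime, timedelta

      from botocore.signers import CloudFrontSigner
      from cryptography.hazmat.primitives import hashes, serialization
      from cryptography.hazmat.primitives.asymmetric import padding

      with open("cloudfront-private-key.pem", "rb") as f:   # placeholder key file
          private_key = serialization.load_pem_private_key(f.read(), password=None)

      def rsa_signer(message):
          # CloudFront canned policies are signed with SHA-1 / PKCS1v15.
          return private_key.sign(message, padding.PKCS1v15(), hashes.SHA1())

      signer = CloudFrontSigner("APKAEXAMPLEKEYID", rsa_signer)  # placeholder key-pair id

      def protected_url(path):
          # Short-lived URL, roughly analogous to an X-Accel-Redirect response.
          return signer.generate_presigned_url(
              "https://dxxxxexample.cloudfront.net" + path,   # placeholder domain
              date_less_than=datetime.utcnow() + timedelta(seconds=30),
          )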

    Read the article

  • Amazon S3 and swfaddress

    - by justinbach
    I recently migrated a large AS3 site (lots of swfs, lots of flvs) to Amazon S3. Pretty much everything but HTML and JS files is being stored/served from Amazon, and it's working well. The only problem I'm having is that I built the site using SWFaddress (actually, via the Gaia framework which uses SWFaddress), and for some reason, SWFaddress is no longer updating the address bar correctly as users navigate from page to page. In other words, the URL persistently remains http://www.mysite.com, not http://www.mysite.com/#/section as would be the case were SWFaddress functioning correctly (and as it was functioning prior to the migration). Stranger yet, if I go to (e.g.) http://www.mysite.com/#/section directly, the deeplinking functions as you'd expect--I arrive directly at the correct section. However, navigating away from that section doesn't have any effect on the address bar, despite the fact that it should be dynamically updated. I've got a crossdomain.xml file set up on the site that allows access from all domains, so that's not the issue, and I don't know what else might be. Any ideas would be greatly appreciated! P.S. I integrated S3 by putting pretty much the entire site in an S3 bucket and then just changing the initial swfobject embed to point to the S3 instance of main.swf, passing in the S3 path as the "base" param to the embedded swf so that all dynamically loaded assets and swfs would also be sourced from s3. Dunno if that's related to the troubles I'm having.

    Read the article

  • Reading in gzipped data from S3 in Ruby

    - by Evan Zamir
    My company has data messages (JSON) stored in gzipped files on Amazon S3. I want to use Ruby to iterate through the files and do some analytics. I started to use the 'aws/s3' gem, and can get each file as an object: #<AWS::S3::S3Object:0x4xxx4760 '/my.company.archive/data/msg/20131030093336.json.gz'> But once I have this object, I do not know how to unzip it or even access the data inside of it.
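
    The object's body is just bytes, so wrapping it in Zlib::GzipReader decompresses it in memory. A hedged sketch against the classic aws/s3 gem API, assuming a connection has already been established; bucket and key are taken from the question:

      require 'aws/s3'
      require 'zlib'
      require 'stringio'

      # obj.value is the raw (gzipped) bytes of the S3 object.
      obj = AWS::S3::S3Object.find('data/msg/20131030093336.json.gz', 'my.company.archive')
      json_text = Zlib::GzipReader.new(StringIO.new(obj.value)).read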

    Read the article

  • VPC SSH port forward into private subnet

    - by CP510
    OK, so I've been racking my brain for DAYS on this dilemma. I have a VPC set up with a public subnet and a private subnet. The NAT is in place, of course. I can connect via SSH to an instance in the public subnet, as well as to the NAT. I can even SSH-connect to the private instance from the public instance. I changed the SSHD configuration on the private instance to accept both port 22 and an arbitrary port number, 1300. That works fine. But I need to set it up so that I can connect to the private instance directly using the 1300 port number, i.e. ssh -i keyfile.pem [email protected] -p 1300, and 1.2.3.4 should route it to the internal server 10.10.10.10. Now I heard iptables is the tool for this, so I went ahead and researched and played around with some routing with it. These are the rules I have set up on the public instance (not the NAT). I didn't want to use the NAT for this, since AWS apparently pre-configures the NAT instances when you set them up, and I heard using iptables can mess that up.

      *filter
      :INPUT ACCEPT [129:12186]
      :FORWARD ACCEPT [0:0]
      :OUTPUT ACCEPT [84:10472]
      -A INPUT -i lo -j ACCEPT
      -A INPUT -i eth0 -p tcp -m state --state NEW -m tcp --dport 1300 -j ACCEPT
      -A INPUT -d 10.10.10.10/32 -p tcp -m limit --limit 5/min -j LOG --log-prefix "SSH Dropped: "
      -A FORWARD -d 10.10.10.10/32 -p tcp -m tcp --dport 1300 -j ACCEPT
      -A OUTPUT -o lo -j ACCEPT
      COMMIT
      # Completed on Wed Apr 17 04:19:29 2013
      # Generated by iptables-save v1.4.12 on Wed Apr 17 04:19:29 2013
      *nat
      :PREROUTING ACCEPT [2:104]
      :INPUT ACCEPT [2:104]
      :OUTPUT ACCEPT [6:681]
      :POSTROUTING ACCEPT [7:745]
      -A PREROUTING -i eth0 -p tcp -m tcp --dport 1300 -j DNAT --to-destination 10.10.10.10:1300
      -A POSTROUTING -p tcp -m tcp --dport 1300 -j MASQUERADE
      COMMIT

    So when I try this from home, it just times out. No connection refused messages or anything. And I can't seem to find any log messages about dropped packets. My security groups and ACL settings allow communication on these ports in both directions, in both subnets, and on the NAT. I'm at a loss. What am I doing wrong?

    Read the article

  • High mysql server load, sar output

    - by eric
    I have a MySQL server that should be performing better than it seems to be. We're running Ubuntu on an Amazon Cluster Compute instance (cc1.4xlarge):

      Linux ip-10-0-1-60 3.2.0-25-virtual #40-Ubuntu SMP Wed May 23 22:20:17 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
      Distributor ID: Ubuntu
      Description:    Ubuntu 12.04 LTS
      Release:        12.04
      Codename:       precise

    I have several output files from sar that I'm not really sure how to interpret. For example, I ran:

      # Individual block device I/O activities
      sar -d 1 180 > logs/block_device_io.log &

    which gave me what looks like really high utilisation of my disk (it turns out this block device maps to /dev/xvdh on /var/lib/mysql type ext4 (rw,_netdev)). The output from my log:

      10:48:59 PM       DEV       tps  rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz     await     svctm     %util
      10:49:00 PM dev202-16     0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
      10:49:00 PM dev202-32     0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
      10:49:00 PM dev8-0        0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
      10:49:00 PM dev202-112 1008.00  31040.00   1416.00     32.20      1.02      1.01      0.89     90.00
      10:49:00 PM dev202-80     0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00

    Am I wrong in thinking this is a problem? I have it above 90% almost the entire time we're seeing slowness. Or does this just mean MySQL is doing what it's supposed to do?

    Read the article

  • Why is uploading to S3 so slow?

    - by Tom Marthenal
    I am using s3cmd to upload to S3:

      # s3cmd put 1gb.bin s3://my-bucket/1gb.bin
      1gb.bin -> s3://my-bucket/1gb.bin  [1 of 1]
      366706688 of 1073741824    34% in  371s   963.22 kB/s

    I am uploading from Linode, which has an outgoing bandwidth cap of 50 Mb/s according to support (roughly 6 MB/s). Why am I getting such slow upload speeds to S3, and how can I improve them?

    Update: Uploading the same file via SCP to an m1.medium EC2 instance (SCP from my Linode to the instance's EBS drive) gives about 44 Mb/s according to iftop (any compression done by the cipher is not a factor).

    Traceroute: Here's a traceroute to the server it's uploading to (according to tcpdump).

      # traceroute s3-1-w.amazonaws.com.
      traceroute to s3-1-w.amazonaws.com. (72.21.194.32), 30 hops max, 60 byte packets
       1  207.99.1.13 (207.99.1.13)  0.635 ms  0.743 ms  0.723 ms
       2  207.99.53.41 (207.99.53.41)  0.683 ms  0.865 ms  0.915 ms
       3  vlan801.tbr1.mmu.nac.net (209.123.10.9)  0.397 ms  0.541 ms  0.527 ms
       4  0.e1-1.tbr1.tl9.nac.net (209.123.10.102)  1.400 ms  1.481 ms  1.508 ms
       5  0.gi-0-0-0.pr1.tl9.nac.net (209.123.11.62)  1.602 ms  1.677 ms  1.699 ms
       6  equinix02-iad2.amazon.com (206.223.115.35)  9.393 ms  8.925 ms  8.900 ms
       7  72.21.220.41 (72.21.220.41)  32.610 ms  9.812 ms  9.789 ms
       8  72.21.222.141 (72.21.222.141)  9.519 ms  9.439 ms  9.443 ms
       9  72.21.218.3 (72.21.218.3)  10.245 ms  10.202 ms  10.154 ms
      10  * * *
      11  * * *
      12  * * *
      13  * * *
      14  * * *
      15  * * *
      16  * * *
      17  * * *
      18  * * *
      19  * * *
      20  * * *
      21  * * *
      22  * * *
      23  * * *
      24  * * *
      25  * * *
      26  * * *
      27  * * *
      28  * * *
      29  * * *
      30  * * *

    The latency looks reasonable, at least until the server stopped responding to ping requests.
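
    Single-connection uploads are often limited by per-stream throughput rather than the pipe itself, so splitting the transfer into concurrent multipart chunks frequently helps. A hedged sketch using boto3's transfer manager (tooling that postdates s3cmd); the chunk size and concurrency are illustrative:

      import boto3
      from boto3.s3.transfer import TransferConfig

      s3 = boto3.client("s3")

      # Split the upload into 64 MB parts and push 10 parts concurrently.
      config = TransferConfig(multipart_chunksize=64 * 1024 * 1024,
                              max_concurrency=10)

      s3.upload_file("1gb.bin", "my-bucket", "1gb.bin", Config=config)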

    Read the article

  • Openfire on EC2 with Jingle

    - by Bjorn Roche
    I would like to run Openfire (or another XMPP server) on EC2. At the moment this is just for testing, so easy setup and configuration are important, as is low cost. At some point, however, if things go well, it will be important to scale this. Ideally it would be nice to not have to switch software when the scaling happens, but if a switch needs to happen later it certainly can. My requirements are: basic XMPP services, including MUC and pubsub. Logins controlled from an external API. Preferably, when a user attempts to connect, the XMPP server checks with the API to see if their username and password are correct, but I can also have the API keep the XMPP server up to date on new users, deleted users, password changes, and so on. I see Openfire has a "user service" API. Not ideal, but it looks workable. Jingle, including relay and STUN. It's not at all clear to me if the Jingle Nodes plugin takes care of this. I'm a bit confused about what's required to set this up, and I'd rather know in advance than be confused along the way :). E.g. it seems like STUN servers require more than one IP address. Can Openfire do all this for me, including STUN and media relay, on a single machine? Is this hard to configure on EC2 with Openfire? What are the basic steps? Would this be easier with something else, say Tigase? What about the database? Should I use Amazon's database service, or run a DB on the same machine? Would the server be compatible with a service like http://www.siteuptime.com/ ? Thanks!

    Read the article

  • How do I configure postfix to use SES, and still forward email from unverified external addresses?

    - by Jeff
    We are using postfix for email group lists (e.g. "[email protected]" will go to all members) on Amazon EC2 systems. For a variety of reasons (scalability and reliability) we would like to use SES for all outgoing email. I was able to configure postfix to use SES as the SMTP relay for outgoing email. This works fine for all verified addresses. But of course, when an outsider emails me at "[email protected]", it chokes: postfix is configured to forward to my gmail account (via the virtual table), but SES rejects the message because the outside sender is not verified. So none of our mailing groups configured through postfix will work this way. I would be happy to rewrite all "From" addresses before sending (and simply leave the original sender in Reply-To), but I cannot seem to find a working configuration. No matter what I set in canonical or generic regexps, SES seems to reject all forwarded emails. Surely somebody must have configured postfix with SES to handle virtual addresses? How does this work?
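
    One approach worth trying is rewriting every sender (envelope and From header) to a single SES-verified address before the message reaches the relay. A hedged sketch of that half, assuming Postfix's sender_canonical mechanism; the paths are conventional and the verified address is a placeholder:

      # /etc/postfix/main.cf
      sender_canonical_maps = regexp:/etc/postfix/sender_canonical
      sender_canonical_classes = envelope_sender, header_sender

      # /etc/postfix/sender_canonical
      # Rewrite any sender to the SES-verified address.
      /.+/    [email protected]

    Regexp tables need no postmap; reload Postfix after editing. Keeping the original author in Reply-To still has to be handled separately.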

    Read the article

  • Cloudfront - How to invalidate objects in a distribution that was transformed from secured to public?

    - by Gil
    The setting: I have an Amazon CloudFront distribution that was originally set up as secured. Objects in this distribution required URL signing. For example, a valid URL used to be of the following format: https://d1stsppuecoabc.cloudfront.net/images/TheImage.jpg?Expires=1413119282&Signature=NLLRTVVmzyTEzhm-ugpRymi~nM2v97vxoZV5K9sCd4d7~PhgWINoTUVBElkWehIWqLMIAq0S2HWU9ak5XIwNN9B57mwWlsuOleB~XBN1A-5kzwLr7pSM5UzGn4zn6GRiH-qb2zEoE2Fz9MnD9Zc5nMoh2XXwawMvWG7EYInK1m~X9LXfDvNaOO5iY7xY4HyIS-Q~xYHWUnt0TgcHJ8cE9xrSiwP1qX3B8lEUtMkvVbyLw__&Key-Pair-Id=APKAI7F5R77FFNFWGABC The distribution points to an S3 bucket that also used to be secured (it only allowed access through CloudFront).

    What happened: At some point, the URL signing expired and would return a 403. Since we no longer need to keep the same security level, I recently changed the settings of the CloudFront distribution and of the S3 bucket it points to, making both public. I then tried to invalidate objects in this distribution. Invalidation did not throw any errors; however, it did not seem to succeed. Requests to the same CloudFront URL (with or without the query string) still return 403. The response header looks like:

      HTTP/1.1 403 Forbidden
      Server: CloudFront
      Date: Mon, 18 Aug 2014 15:16:08 GMT
      Content-Type: text/xml
      Content-Length: 110
      Connection: keep-alive
      X-Cache: Error from cloudfront
      Via: 1.1 3abf650c7bf73e47515000bddf3f04a0.cloudfront.net (CloudFront)
      X-Amz-Cf-Id: j1CszSXz0DO-IxFvHWyqkDSdO462LwkfLY0muRDrULU7zT_W4HuZ2B==

    Things I tried: I set up another CloudFront distribution that points to the same S3 bucket as origin server. Requests to the same object in the new distribution were successful.

    The question: Did anyone encounter the same situation, where a CloudFront URL that returns 403 cannot be invalidated? Is there any reason why the object wouldn't get invalidated? Thanks for your help!
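
    Two notes that may help. First, if the distribution's cache behavior still has Restrict Viewer Access (trusted signers) enabled, CloudFront generates the 403 itself before any cache lookup, so no invalidation can fix it; the fresh distribution working points in that direction. Second, for completeness, a hedged boto3 sketch of issuing an invalidation; the distribution ID is a placeholder:

      import time

      import boto3

      cloudfront = boto3.client("cloudfront")

      cloudfront.create_invalidation(
          DistributionId="E1XXXXXXXXXXXX",          # placeholder
          InvalidationBatch={
              "Paths": {"Quantity": 1, "Items": ["/images/*"]},
              # Any unique string; reusing one replays the earlier invalidation.
              "CallerReference": str(time.time()),
          },
      )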

    Read the article

  • CDN Rerouting on 404 (file not yet in synch with original storage)

    - by Alan Ristic
    Here is the problem. I've set up my app (on EC2) to store uploaded images directly on Amazon S3. I'd like to be able to serve the static files (CDN-style) from my 'home' server, so I wrote a script that syncs from S3. But there is a window of (at least) one minute before the copies are in sync. Now I see two solutions to the problem of pics not yet being available on the 'home' server:

    1. I write a script on EC2 (where the app resides) to fetch from the DB the pics that have a status of "not-yet-synch", which is the default state when a user uploads a picture. The script then pings each picture, and if it gets an OK response, updates the DB from "not-yet-synch" to "synch".

    2. The preferred solution would be to let Apache (in this case) redirect a request for an image to S3 if it sees a 404 (i.e. it doesn't find the requested image). This way I wouldn't need the script from solution 1.

    So what approach do you suggest I take to solve this redundancy problem? Or what is the practice in production environments? To further clarify: I'd like to serve images first from the 'home' server, and if that fails, serve them from S3. Tnx, Alan
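
    For the preferred solution 2, the usual Apache building block is mod_rewrite: serve the file when the synced copy exists, otherwise redirect the client to S3. A hedged sketch; the path prefix and bucket URL are placeholders:

      # Inside the vhost serving the 'home' copies
      RewriteEngine On
      # If the requested file has not been synced yet...
      RewriteCond %{REQUEST_FILENAME} !-f
      # ...send the client to the S3 original instead of returning 404.
      RewriteRule ^/?uploads/(.*)$ http://mybucket.s3.amazonaws.com/uploads/$1 [R=302,L]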

    Read the article

  • Amazon java.lang.VerifyError Android

    - by easycheese
    I have been trying to submit 2 separate apps into the Amazon App store but they keep being rejected. Here is the stack trace for the first:

      11-05 11:14:36.488 E/AndroidRuntime(28128): FATAL EXCEPTION: AsyncTask #1
      11-05 11:14:36.488 E/AndroidRuntime(28128): java.lang.RuntimeException: An error occured while executing doInBackground()
      11-05 11:14:36.488 E/AndroidRuntime(28128): at android.os.AsyncTask$3.done(AsyncTask.java:200)
      11-05 11:14:36.488 E/AndroidRuntime(28128): at java.util.concurrent.FutureTask$Sync.innerSetException(FutureTask.java:273)
      11-05 11:14:36.488 E/AndroidRuntime(28128): at java.util.concurrent.FutureTask.setException(FutureTask.java:124)
      11-05 11:14:36.488 E/AndroidRuntime(28128): at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:307)
      11-05 11:14:36.488 E/AndroidRuntime(28128): at java.util.concurrent.FutureTask.run(FutureTask.java:137)
      11-05 11:14:36.488 E/AndroidRuntime(28128): at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1068)
      11-05 11:14:36.488 E/AndroidRuntime(28128): at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:561)
      11-05 11:14:36.488 E/AndroidRuntime(28128): at java.lang.Thread.run(Thread.java:1096)
      11-05 11:14:36.488 E/AndroidRuntime(28128): Caused by: java.lang.VerifyError: com.companionfree.WLThemeViewer.AmazonClientManager
      11-05 11:14:36.488 E/AndroidRuntime(28128): at com.companionfree.WLThemeViewer.UpdateDBs.doInBackground(UpdateDBs.java)
      11-05 11:14:36.488 E/AndroidRuntime(28128): at com.companionfree.WLThemeViewer.UpdateDBs.doInBackground(UpdateDBs.java)
      11-05 11:14:36.488 E/AndroidRuntime(28128): at android.os.AsyncTask$2.call(AsyncTask.java:185)
      11-05 11:14:36.488 E/AndroidRuntime(28128): at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:305)
      11-05 11:14:36.488 E/AndroidRuntime(28128): ... 4 more

    And the relevant logcat for the second:

      10-12 15:41:48.929 D/dalvikvm( 2451): GC_FOR_MALLOC freed 8099 objects / 524416 bytes in 34ms
      10-12 15:41:49.327 I/RPC ( 1563): rx thread timeout (1 clients):
      10-12 15:41:49.828 I/RPC ( 1563): rx thread timeout (1 clients):
      10-12 15:41:50.089 I/ActivityManager( 1563): Starting activity: Intent { act=android.intent.action.MAIN cat=[android.intent.category.LAUNCHER] flg=0x10200000 cmp=com.companionfree.pushup/.MainScreen }
      10-12 15:41:50.099 D/SurfaceFlinger( 1563): Layer::setBuffers(this=0xeafa50), pid=1563, w=1, h=1
      10-12 15:41:50.099 D/SurfaceFlinger( 1563): Layer::setBuffers(this=0xeafa50), pid=1563, w=1, h=1
      10-12 15:41:50.139 D/SurfaceFlinger( 1563): Layer::requestBuffer(this=0xeafa50), index=0, pid=1563, w=480, h=800 success
      10-12 15:41:50.189 I/ActivityManager( 1563): Start proc com.companionfree.pushup for activity com.companionfree.pushup/.MainScreen: pid=2644 uid=10129 gids={1015, 3003}
      10-12 15:41:50.319 I/RPC ( 1563): rx thread timeout (1 clients):
      10-12 15:41:50.359 W/dalvikvm( 2644): VFY: Lcom/companionfree/pushup/WorkoutDbAdapter; is not instance of Landroid/app/Activity;
      10-12 15:41:50.369 W/dalvikvm( 2644): VFY: bad arg 0 (into Landroid/app/Activity;)
      10-12 15:41:50.369 W/dalvikvm( 2644): VFY: rejecting call to Lcom/amazon/android/Kiwi;.onActivityResult (Landroid/app/Activity;IILandroid/content/Intent;)Z
      10-12 15:41:50.369 W/dalvikvm( 2644): VFY: rejecting opcode 0x71 at 0x0000
      10-12 15:41:50.369 W/dalvikvm( 2644): VFY: rejected Lcom/companionfree/pushup/WorkoutDbAdapter;.onActivityResult (IILandroid/content/Intent;)V
      10-12 15:41:50.369 W/dalvikvm( 2644): Verifier rejected class Lcom/companionfree/pushup/WorkoutDbAdapter;
      10-12 15:41:50.369 D/AndroidRuntime( 2644): Shutting down VM
      10-12 15:41:50.369 W/dalvikvm( 2644): threadid=1: thread exiting with uncaught exception (group=0x40025a70)
      10-12 15:41:50.369 E/AndroidRuntime( 2644): FATAL EXCEPTION: main
      10-12 15:41:50.369 E/AndroidRuntime( 2644): java.lang.VerifyError: com.companionfree.pushup.WorkoutDbAdapter
      10-12 15:41:50.369 E/AndroidRuntime( 2644): at com.companionfree.pushup.MainScreen.onCreateMainScreen(MainScreen.java)
      10-12 15:41:50.369 E/AndroidRuntime( 2644): at com.companionfree.pushup.MainScreen.onCreate(MainScreen.java)
      10-12 15:41:50.369 E/AndroidRuntime( 2644): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1088)
      10-12 15:41:50.369 E/AndroidRuntime( 2644): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2802)
      10-12 15:41:50.369 E/AndroidRuntime( 2644): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2859)
      10-12 15:41:50.369 E/AndroidRuntime( 2644): at android.app.ActivityThread.access$2300(ActivityThread.java:136)
      10-12 15:41:50.369 E/AndroidRuntime( 2644): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:2179)
      10-12 15:41:50.369 E/AndroidRuntime( 2644): at android.os.Handler.dispatchMessage(Handler.java:99)
      10-12 15:41:50.369 E/AndroidRuntime( 2644): at android.os.Looper.loop(Looper.java:143)
      10-12 15:41:50.369 E/AndroidRuntime( 2644): at android.app.ActivityThread.main(ActivityThread.java:5073)
      10-12 15:41:50.369 E/AndroidRuntime( 2644): at java.lang.reflect.Method.invokeNative(Native Method)
      10-12 15:41:50.369 E/AndroidRuntime( 2644): at java.lang.reflect.Method.invoke(Method.java:521)
      10-12 15:41:50.369 E/AndroidRuntime( 2644): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:858)
      10-12 15:41:50.369 E/AndroidRuntime( 2644): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:616)
      10-12 15:41:50.369 E/AndroidRuntime( 2644): at dalvik.system.NativeStart.main(Native Method)
      10-12 15:41:50.379 W/ActivityManager( 1563): Force finishing activity com.companionfree.pushup/.MainScreen
      10-12 15:41:50.399 D/SurfaceFlinger( 1563): Layer::setBuffers(this=0xeff6b8), pid=1563, w=1, h=1
      10-12 15:41:50.399 D/SurfaceFlinger( 1563): Layer::setBuffers(this=0xeff6b8), pid=1563, w=1, h=1
      10-12 15:41:50.419 D/SurfaceFlinger( 1563): Layer::requestBuffer(this=0xeff6b8), index=0, pid=1563, w=480, h=337 success
      10-12 15:41:50.469 D/dalvikvm( 2451): GC_FOR_MALLOC freed 7889 objects / 521072 bytes in 105ms
      10-12 15:41:50.819 I/RPC ( 1563): rx thread timeout (1 clients):

    I see the same verify error on both but I can't figure it out. The only common library used between the 2 apps is the FlurryAgent.jar for analytics. For the top app I have For the bottom app I have in the manifests. The only information I have been able to find out is about libraries (GSON) and needing to use dx but I am using Eclipse so that doesn't help. To make this more difficult, the error does NOT occur on the Android Market. Yet the testers at Amazon say that it FC 5/5 times on each of their devices (I tried using an emulator for their test devices and they worked fine). I know they use "wrapper" code around my app and I think it must be interfering in some way. Does anyone have any experience with this?

    Read the article

  • How to make an EC2 AMI work on Xen on Debian

    - by user67103
    Hi. Here is the scenario. We are creating a cloud app. We have a Debian machine on which we create a customized image and upload it to Amazon EC2. After uploading it to the cloud we made some more customizations and are trying to rebundle it, and we are facing some issues in rebundling. We would like to know if we could do something like this:

    a. Create an AMI image on Debian
    b. Load it onto the Xen hypervisor running on the Debian machine
    c. Customize the image
    d. Save the customized image
    e. Upload it to EC2

    The issue is that I am unable to find a proper guide on how to install Xen on Debian, and whether an AMI that runs on Xen will work on EC2. Any suggestions/help would be greatly appreciated. Thanks. Krishna

    Read the article

  • Setup a Reverse Proxy with Nginx and Apache on EC2

    - by heavymark
    Good day, I am currently using the free Amazon EC2 micro instance to learn Linux and server setup. I wish to set up Nginx as a reverse web proxy. I found a great article on MediaTemple on how to do it: http://wiki.mediatemple.net/w/Using_Nginx_as_a_Reverse_Web_Proxy The directions work for almost any server except EC2. One difference between EC2 and MediaTemple is how IPs work: EC2 instances do not know their elastic IP. So when following the wiki directions, in the virtual hosts I put *:80 instead of myip:80. When using Apache alone this works perfectly. In the Apache virtual hosts I used "127.0.0.1:80", and in Nginx I put *:80. Apache restarts, but Nginx reports an error that it cannot bind because the IP is already in use. If I could put an actual IP in the Nginx file it would work, but since EC2 requires me to use the asterisk, it ends up conflicting with the Apache virtual hosts entry. Anyone know a simple way around this (other than not using EC2)? ;-) Thank you! Cheers, Christopher
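
    For what it's worth, the conflict disappears if the two servers never compete for port 80 at all: let Nginx take *:80 and move Apache to a loopback port, so neither needs to know the elastic IP. A hedged sketch; the server name and port are placeholders:

      # Nginx: listen on all interfaces, proxy to Apache on loopback
      server {
          listen 80;
          server_name example.com;   # placeholder

          location / {
              proxy_pass http://127.0.0.1:8080;
              proxy_set_header Host $host;
              proxy_set_header X-Real-IP $remote_addr;
          }
      }

      # Apache side: Listen 127.0.0.1:8080 and <VirtualHost 127.0.0.1:8080>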

    Read the article

  • SQL queries break our game! (Back-end server is at capacity)

    - by TimH
    We have a Facebook game that stores all persistent data in a MySQL database running on a large Amazon RDS instance. One of our tables is 2 GB in size. If I run any query on that table that takes more than a couple of seconds, any SQL actions performed by our game fail with the error:

      HTTP/1.1 503 Service Unavailable: Back-end server is at capacity

    This obviously brings down our game! I've monitored CPU usage on the RDS instance during these periods, and though it does spike, it doesn't go much over 50%. Previously we were on a smaller instance size and it did hit 100%, so I'd hoped just throwing more CPU capacity at the problem would solve it. I now think it's an issue with the number of open connections. However, I've only been working with SQL for 8 months or so, so I'm no expert on MySQL configuration. Is there perhaps some configuration setting I can change to prevent these queries from overloading the server, or should I just not run them while our game is up? I'm using MySQL Workbench to run the queries. Here's an example:

      SELECT * FROM BlueBoxEngineDB.Transfer
      WHERE Amount = 1000 AND FromUserId = 4 AND Status = 'Complete';

    As you can see, it's not overly complex. There are only 5 columns in the table. Any help would be very much appreciated. Thanks!
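
    With no index on the filtered columns, that query scans the whole 2 GB table and can tie up connections for seconds at a time, which matches the symptoms. A hedged sketch (the index name is arbitrary); EXPLAIN should stop reporting a full scan once it is in place:

      -- Composite index covering the three equality predicates.
      CREATE INDEX idx_transfer_from_status_amount
          ON Transfer (FromUserId, Status, Amount);

      -- Confirm the plan uses the index rather than type=ALL (full scan).
      EXPLAIN SELECT * FROM BlueBoxEngineDB.Transfer
      WHERE Amount = 1000 AND FromUserId = 4 AND Status = 'Complete';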

    Read the article
