Search Results

Search found 2912 results on 117 pages for 'amazon vpc'.

Page 15 of 117

  • Grails: GGTS not running on Amazon AWS EC2; anyone else successful?

    - by Anonymous Human
    I'm just curious if anyone has had success trying to run the Groovy/Grails Tool Suite (GGTS) on an Amazon AWS EC2 instance with its display exported to a Windows machine. If so, I wanted to know which flavor of Linux was used on the EC2 instance. I am not having much success with it on Amazon Linux but haven't tried their Ubuntu instances yet. I got all the way to getting GGTS installed and the display exported, but when I launch GGTS I get log errors about missing libraries. This is most likely because I didn't install it with yum, so I am probably missing dependencies, but I didn't have a choice: it's not offered as a yum package. Here are my log file errors when I try to launch GGTS:

        !SESSION 2014-06-08 03:08:04.873 -----------------------------------------------
        eclipse.buildId=3.5.1.201405030657-RELEASE-e43
        java.version=1.7.0_55
        java.vendor=Oracle Corporation
        Framework arguments: -product org.springsource.ggts.ide
        Command-line arguments: -os linux -ws gtk -arch x86_64 -product org.springsource.ggts.ide

        !ENTRY org.eclipse.osgi 4 0 2014-06-08 03:08:12.116
        !MESSAGE Application error
        !STACK 1
        java.lang.UnsatisfiedLinkError: Could not load SWT library. Reasons:
            /home/ec2-user/ggts_sh/ggts-3.5.1.RELEASE/configuration/org.eclipse.osgi/bundles/704/1/.cp/libswt-pi-gtk-4335.so: libgtk-x11-2.0.so.0: cannot open shared object file: No such file or directory
            no swt-pi-gtk in java.library.path
            /home/ec2-user/.swt/lib/linux/x86_64/libswt-pi-gtk-4335.so: libgtk-x11-2.0.so.0: cannot open shared object file: No such file or directory
            Can't load library: /home/ec2-user/.swt/lib/linux/x86_64/libswt-pi-gtk.so
            at org.eclipse.swt.internal.Library.loadLibrary(Library.java:331)
            at org.eclipse.swt.internal.Library.loadLibrary(Library.java:240)
            at org.eclipse.swt.internal.gtk.OS.<clinit>(OS.java:45)
            at org.eclipse.swt.internal.Converter.wcsToMbcs(Converter.java:63)
            at org.eclipse.swt.internal.Converter.wcsToMbcs(Converter.java:54)
            at org.eclipse.swt.widgets.Display.<clinit>(Display.java:133)
            at org.eclipse.ui.internal.Workbench.createDisplay(Workbench.java:679)
            at org.eclipse.ui.PlatformUI.createDisplay(PlatformUI.java:162)
            at org.eclipse.ui.internal.ide.application.IDEApplication.createDisplay(IDEApplication.java:154)
            at org.eclipse.ui.internal.ide.application.IDEApplication.start(IDEApplication.java:96)
            at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:196)
            at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:110)
            at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:79)
            at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:354)
            at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:181)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:606)
            at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:636)
            at org.eclipse.equinox.launcher.Main.basicRun(Main.java:591)
            at org.eclipse.equinox.launcher.Main.run(Main.java:1450)
            at org.eclipse.equinox.launcher.Main.main(Main.java:1426)
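
    The key line seems to be the missing libgtk-x11-2.0.so.0, so GGTS itself may be fine and the box may simply lack the GTK runtime. A sketch of what I'm about to try (assuming Amazon Linux, where I believe the gtk2 package provides that shared library; libXtst is a guess at a further SWT dependency):

        # Confirm which package provides the missing library, then install it
        yum provides '*/libgtk-x11-2.0.so.0'
        sudo yum install -y gtk2 libXtst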

    Read the article

  • Adding Multiple Interfaces to EC2 Ubuntu 12.04

    - by nocode
    I have an m1.medium Ubuntu 12.04 instance with two ENIs. I have a VPC set up with a private and a public subnet:

        Private: 10.50.1.0/24
        Public:  10.50.101.0/24

    I launched the instance in the private subnet. I configured a NAT instance, which routes internet access for all servers in the private subnet. The route table on the private subnet points to the NAT instance, and the route table on the public subnet points to the internet gateway. I am trying to add a public interface to the machine so that I can put it behind an ELB. When I added the second ENI, configured a static IP in /etc/network/interfaces and restarted the network services, I could no longer reach the machine from the public subnet.

        Works:
            Private -> Private
            Private -> Public
        Does not work:
            Public  -> Private

    For the Public -> Private case, I ran tcpdump on the private machine and can see the request coming in. My guess is that the reply is trying to route over the new public interface instead of the private one. Here's my routing table:

        default      10.50.1.1  0.0.0.0        UG  100  0  0  eth0
        10.50.1.0    *          255.255.255.0  U   0    0  0  eth0
        10.50.101.0  *          255.255.255.0  U   0    0  0  eth1

    My networking knowledge is limited; I believe I have to add some routes, but I'm unsure of the exact commands/syntax, so I've pasted below the kind of thing I found suggested.
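
    The suggestions I keep finding point at policy-based routing: give eth1 its own routing table so replies leave via the interface the request arrived on. A minimal, unverified sketch (assuming eth1's address is 10.50.101.20 and the public subnet's gateway is 10.50.101.1; both values are placeholders for my setup):

        # Create a second routing table for eth1 (the name "eth1rt" is arbitrary)
        echo "200 eth1rt" | sudo tee -a /etc/iproute2/rt_tables

        # Default route for traffic leaving via eth1
        sudo ip route add default via 10.50.101.1 dev eth1 table eth1rt

        # Replies sourced from eth1's address should use that table
        sudo ip rule add from 10.50.101.20/32 table eth1rt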

    Read the article

  • Strange stream of HTTP GET requests in Apache logs, from Amazon EC2 instances

    - by Alexandre Boeglin
    I just had a look at my Apache logs, and I see a lot of very similar requests:

        GET / HTTP/1.1
        User-Agent: curl/7.24.0 (i386-redhat-linux-gnu) libcurl/7.24.0 NSS/3.13.5.0 zlib/1.2.5 libidn/1.18 libssh2/1.2.2
        Host: [my_domain].org
        Accept: */*

    There's a steady stream of those, about 2 or 3 per minute; they all request the same domain and resource (there are slight variations in user agent version numbers); they come from a lot of different IPv4 and IPv6 addresses, in blocks that belong to Amazon EC2 (in Singapore, Japan, Ireland and the USA). I tried to look for an explanation online, or even just similar stories, but couldn't find any. Has anyone got a clue as to what this is? It doesn't look malicious per se, but it's just annoying me, and I couldn't find any more information about it. I first suspected it could be a bot checking if my server is still up, but: I don't remember subscribing to such a service; why would it need to check my site twice every minute; and why doesn't it use a clearly identifying FQDN? Or should I send this question to Amazon, via their abuse contact? Thanks!
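
    In case the source list helps someone recognize this, here is the sketch I've been using to tally the offending IPs (assuming a combined-format access log at the usual Debian/Ubuntu path):

        # Count requests per source IP for this user agent, busiest first
        grep 'curl/7.24.0' /var/log/apache2/access.log \
          | awk '{print $1}' | sort | uniq -c | sort -rn | head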

    Read the article

  • Attempting to update Amazon Route53 using a script, but domain is not being updated

    - by ks78
    I have several Amazon EC2 instances, running Ubuntu 10.04, with which I'd like to use Amazon's Route53. I set up a script as described in Shlomo Swidler's article, but I'm still missing something. When the script runs, it doesn't return any output, which I initially assumed meant it ran correctly. However, when I check the DNS records using MyR53DNS, there are no entries for my instances. Here's my script:

        #!/bin/tcsh -f
        set root=`dirname $0`
        setenv EC2_HOME /usr/lib/ec2-api-tools
        setenv EC2_CERT /etc/cron.route53/ec2_x509_cert.pem
        setenv EC2_PRIVATE_KEY /etc/cron.route53/ec2_x509_private.pem
        setenv AWS_ACCESS_KEY_ID myaccesskeyid
        setenv AWS_SECRET_ACCESS_KEY mysecretaccesskey

        /user/bin/ec2-describe-instances | \
        perl -ne '/^INSTANCE\s+(i-\S+).*?(\S+\.amazonaws\.com)/ \
        and do { $dns = $2; print "$1 $dns\n" }; /^TAG.+\sShortName\s+(\S+)/ \
        and print "$1 $dns\n"' | \
        perl -ane 'print "$F[0] CNAME $F[1] --replace\n"' | \
        xargs -n 4 $/etc/cron.route53/cli53/cli53.py \
        rrcreate -x 60 mydomain.com

    Does anyone see a problem with this script? If it's not the script, what else could be preventing my Route53 domain from being updated? I am using security groups to IP-restrict the instances. I've tried opening port 53, but that didn't seem to have an effect. Is there another port that Route53 uses? I'd appreciate any help or guidance the ServerFault community can offer. Let me know if you need any further info.
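
    On the port question, one check I've sketched out (on my understanding, possibly wrong, that the Route53 API is called over HTTPS rather than DNS port 53) is to confirm outbound 443 from an instance:

        # Verify the instance can reach the Route53 API endpoint over HTTPS
        curl -sv https://route53.amazonaws.com/ -o /dev/null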

    Read the article

  • Difference in performance: local machine vs. Amazon medium instance

    - by user644745
    I see a drastic difference in performance metrics when I run Apache Bench (ab) against my local machine vs. production, which is hosted on an Amazon medium instance. The same concurrency (5) and the same total number of requests (111) were run against both. Amazon has more memory than my local machine, but my local machine has 2 CPUs vs 1 CPU in the m1.medium. My internet speed is very low at the moment; I am getting a transfer rate of 25.29 KB/s. How can I improve the performance? I also don't know how to interpret Connect, Processing, Waiting and Total in the ab output (the command I ran is sketched after the results). Here is localhost:

        Server Hostname:        localhost
        Server Port:            9999

        Document Path:          /
        Document Length:        7631 bytes

        Concurrency Level:      5
        Time taken for tests:   1.424 seconds
        Complete requests:      111
        Failed requests:        102
           (Connect: 0, Receive: 0, Length: 102, Exceptions: 0)
        Write errors:           0
        Total transferred:      860808 bytes
        HTML transferred:       847155 bytes
        Requests per second:    77.95 [#/sec] (mean)
        Time per request:       64.148 [ms] (mean)
        Time per request:       12.830 [ms] (mean, across all concurrent requests)
        Transfer rate:          590.30 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:        0    0   0.5      0       1
        Processing:    14   63  99.9     43     562
        Waiting:       14   60  96.7     39     560
        Total:         14   63  99.9     43     563

    And this is production:

        Document Path:          /
        Document Length:        7783 bytes

        Concurrency Level:      5
        Time taken for tests:   33.883 seconds
        Complete requests:      111
        Failed requests:        0
        Write errors:           0
        Total transferred:      877566 bytes
        HTML transferred:       863913 bytes
        Requests per second:    3.28 [#/sec] (mean)
        Time per request:       1526.258 [ms] (mean)
        Time per request:       305.252 [ms] (mean, across all concurrent requests)
        Transfer rate:          25.29 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:      290  297  14.0    293     413
        Processing:   897 1178  63.4   1176    1391
        Waiting:      296  606 135.6    588    1171
        Total:       1191 1475  66.0   1471    1684
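
    For reference, the benchmark invocation in both cases was of this shape (host and port adjusted per environment):

        # 111 total requests, 5 at a time, against the local server
        ab -n 111 -c 5 http://localhost:9999/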

    Read the article

  • Shared files folder in Amazon Elastic Beanstalk environment

    - by por
    I'm working on a Drupal application which is planned to be hosted in an Amazon Elastic Beanstalk environment. Basically, Elastic Beanstalk enables the application to scale automatically by starting additional web server instances based on predefined rules. The shared database runs on an Amazon RDS instance, which all instances can access properly. The problem is the shared files folder (sites/default/files). We're using git as SCM, and with it we're able to deploy new versions by executing $ git aws.push. In the background, Elastic Beanstalk automatically deletes ($ rm -rf) the current codebase from all servers running in the environment and deploys the new version. The plan was to use S3 (s3fs) for shared files in the staging environment and NFS in the production environment. We've managed to set up the environment to the point where the shared files folder is mounted properly after a reboot. But... the problem is that, in this setup, deployment of new versions to running instances fails because $ rm -rf can't remove the mounted directory; as a result, the entire environment goes down and we need to restart it, which isn't really an elegant solution. Question #1: what would be the proper way to manage shared files in this kind of deployment? Are you running such an environment? How did you solve the problem? Looking at the Elastic Beanstalk Hostmanager code (Ruby), there seems to be a way to hook our functionality (unmount if mounted in pre-deploy, mount in post-deploy) into Hostmanager (/opt/hostmanager/srv/lib/elasticbeanstalk/hostmanager/applications/phpapplication.rb), but the scripts defined in the file (i.e. /tmp/php_post_deploy_app.sh) don't seem to be working; a sketch of what we tried is below. That might be because our Ruby skills are non-existent. Question #2: did you manage to hook your functionality into Hostmanager in a portable way (i.e. without changing the core Hostmanager files)?
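
    For concreteness, this is roughly the pre-deploy hook we attempted (a sketch; the mount point is specific to our layout, and the pre-deploy filename is a guess modeled on the post-deploy name mentioned above):

        #!/bin/bash
        # /tmp/php_pre_deploy_app.sh (sketch): unmount the shared files
        # directory so Hostmanager's rm -rf of the old codebase can succeed
        MOUNT_POINT=/var/www/html/sites/default/files
        if mountpoint -q "$MOUNT_POINT"; then
            umount "$MOUNT_POINT"
        fi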

    Read the article

  • Creating an EC2 image on Amazon fails at mkfs.ext3

    - by Dave Orr
    I'm trying to create an image of my EC2 instance in Amazon's cloud. It's been a bit of an adventure so far. I did manage to install Amazon's ec2-api-tools, which was harder than it seemed like it should have been. Then I ran:

        ec2-bundle-vol -d /mnt -k pk-{key}.pem -c cert-{cert}.pem -u {uid} -s 1536

    Which returned:

        Copying / into the image file /mnt/image...
        Excluding:
             /sys/kernel/debug
             /sys/kernel/security
             /sys
             /proc
             /dev/pts
             /dev
             /dev
             /media
             /mnt
             /proc
             /sys
             /etc/udev/rules.d/70-persistent-net.rules
             /etc/udev/rules.d/z25_persistent-net.rules
             /mnt/image
             /mnt/img-mnt
        1+0 records in
        1+0 records out
        1048576 bytes (1.0 MB) copied, 0.00677357 s, 155 MB/s
        mkfs.ext3: option requires an argument -- 'L'
        Usage: mkfs.ext3 [-c|-l filename] [-b block-size] [-f fragment-size]
            [-i bytes-per-inode] [-I inode-size] [-J journal-options]
            [-G meta group size] [-N number-of-inodes]
            [-m reserved-blocks-percentage] [-o creator-os]
            [-g blocks-per-group] [-L volume-label]
            [-M last-mounted-directory] [-O feature[,...]]
            [-r fs-revision] [-E extended-option[,...]]
            [-T fs-type] [-U UUID] [-jnqvFKSV] device [blocks-count]
        ERROR: execution failed: "mkfs.ext3 -F /mnt/image -U 1c001580-9118-4a50-9a25-dcf02be6d25f -L "

    So mkfs.ext3 wants an argument for -L, which is a volume label. But ec2-bundle-vol doesn't seem to take a volume label as an argument, and the docs (http://docs.amazonwebservices.com/AmazonEC2/gsg/2006-06-26/creating-an-image.html) don't seem to think one should be needed. Certainly their sample command:

        # ec2-bundle-vol -d /mnt -k ~root/pk-HKZYKTAIG2ECMXYIBH3HXV4ZBZQ55CLO.pem -u 495219933132 -s 1536

    doesn't specify anything. So... any help? What am I missing?
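
    One guess I'm pursuing (unverified): ec2-bundle-vol may be copying the root filesystem's label into that -L flag, and my root filesystem simply has no label. A sketch of checking and setting one before retrying:

        # Empty output here would mean the root filesystem has no label
        sudo e2label /dev/sda1

        # Give it one (the label "rootfs" is arbitrary), then re-run ec2-bundle-vol
        sudo e2label /dev/sda1 rootfs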

    Read the article

  • Amazon EC2 - Free memory

    - by Damo
    We have an Amazon EC2 small instance running, and over the past few days we noticed that the free memory is going down and down. On the small instance we are running apache and tomcat6. Tomcat is started with the following JVM parameters:

        -Xms32m -Xmx128m -XX:PermSize=128m -XX:MaxPermSize=256m

    We use Nagios to monitor things like pending updates, free disk space and memory. Everything else is behaving as expected, but our memory is going down all the time. Our app receives approximately half a million hits a day. When I shut down apache and tomcat and ran free -m, we had only 594 MB of memory free out of the 1.7 GB. Not much else is running on the small instance, and when running the top command I cannot see where the memory is going. The app we run on tomcat is a Grails webapp. Could there be a memory leak within our application? I read online that a small Amazon instance is fine for running apache and tomcat, and I found a few posts showing how to set them up to limit memory usage; I have already performed those steps. The memory is not being used up as quickly now, but it is still decreasing over time. We have other Amazon EC2 small instances running Grails apps and the memory is fairly stable on those nodes, but they don't receive as much traffic. Just to add: when I run the top command on the problem server, I cannot see where all the memory is being used. Any help with this is greatly appreciated. The output of free -m on my server is as follows:

                          total       used       free     shared    buffers     cached
        Mem:               1657       1380        277          0        158        773
        -/+ buffers/cache:             447       1209
        Swap:               895          0        895

    In your opinion, does this look OK? At what stage would the OS give back memory? Would it wait until free memory reaches 0%, or is this OS-dependent?
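
    Since top isn't showing me the culprit, this is the sketch I've been using to rank processes by resident memory (standard procps options):

        # Top 10 processes by resident set size (RSS)
        ps aux --sort=-rss | head -n 11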

    Read the article

  • Amazon EC2 hostnames

    - by Firefly
    I'm currently trying to set up a Tigase cluster on Amazon EC2 instances in a VPC, and I'm having trouble getting it to work because the hostnames of the instances are not "full DNS names". According to the Tigase documentation:

        Please note the proper DNS configuration is critical for the cluster to work
        correctly. Make sure the 'hostname' command returns a full DNS name on each
        cluster node.

    Can anyone explain what a full DNS name is and how I can set my instances to use one? Currently my instances get a default hostname of the form "ip-10-0-0-20". A sketch of what I've tried is below.
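
    What I've tried so far (a sketch; the domain is a placeholder I made up, and I'm not sure this is what Tigase means by a full DNS name):

        # Set an FQDN-style hostname
        sudo hostname node1.cluster.example.internal

        # Map it in /etc/hosts so 'hostname -f' resolves it
        echo "10.0.0.20 node1.cluster.example.internal node1" | sudo tee -a /etc/hosts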

    Read the article

  • Amazon EC2 master node hanging

    - by Algorist
    Hi, I am using the Cloudera setup to launch a Hadoop cluster on Amazon. Sometimes the master Hadoop node hangs and we have to restart the job from the beginning. Did anyone face a similar problem and resolve the issue? Thank you.

    Read the article

  • Alternative to Amazon's S3 service?

    - by Cory
    Just wondering if there is a good alternative to Amazon's S3 service? I like S3, but the bandwidth cost is high. I looked at CloudFiles from Rackspace, but the cost is even higher. I don't mind prepaying or having a monthly payment in order to reduce the bandwidth cost greatly. Thank you for any help.

    Read the article

  • Use personal Amazon S3 account to back up Gmail using Ubuntu

    - by lokheart
    As the title says: I have found online that Backupify offers shifting from their S3 to the user's personal S3. I have a Backupify account, but I can't find this option; besides, I'd prefer my email not be processed by somebody else. Is it possible to use my own personal Amazon S3 account to back up Gmail? Preferably as an incremental backup, so I don't have to use too much bandwidth re-uploading redundant data to S3. I am using Ubuntu, so a script is OK for me (a sketch of the shape I'm after is below). Thanks!
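
    To show the shape of what I'm hoping for (a sketch; it assumes the mail has already been fetched to a local directory by some tool such as getmail or offlineimap, and the bucket name is a placeholder):

        # Incrementally sync a local Gmail backup to S3; s3cmd sync only
        # uploads files that are new or changed
        s3cmd sync ~/gmail-backup/ s3://my-gmail-backup-bucket/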

    Read the article

  • Remote backup with Amazon S3

    - by mjr
    Does anyone know of good Mac software to sync a network-attached storage device or a Drobo to Amazon S3? I'd love to mail them a storage device for the first dump and then do nightly syncs.

    Read the article

  • Amazon S3 tools for Debian?

    - by Jonik
    I need to (programmatically, in a shell script) upload an EAR file to an Amazon S3 bucket on Debian (5.0.4). What, if any, Debian package provides simple, scriptable tools for that? (I want raw S3 bucket access, so please don't suggest solutions like Jungle Disk.)
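
    The closest sketch I have so far uses s3cmd, which I believe is packaged in Debian (the bucket name is a placeholder):

        # Install the client, configure credentials once, then upload the EAR
        sudo apt-get install s3cmd
        s3cmd --configure
        s3cmd put myapp.ear s3://my-deploy-bucket/myapp.ear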

    Read the article

  • Site-to-site VPN setup: Amazon EC2 Openswan (left) and Cisco ASA 5540 (right)

    - by user197279
    Need help with this VPN setup on Amazon EC2 using Openswan.

    Left side (EC2):
        peer IP: according to the client using Cisco (must be public)
        encrypted network: according to the client using Cisco (must be public)

    Right side (Cisco ASA 5540):
        peer IP: 3.3.3.3
        peer host / rightsubnet: 3.3.3.30/32 (public NAT'd IP)

    The goal is to set up a site-to-site VPN connection with the client, and I need guidance on the setup required on the EC2 side; the draft config I have so far is below. Appreciate the help. Thanks.
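
    The Openswan connection I've drafted so far (a sketch; the left-side values are placeholders because they depend on our elastic IP and encrypted network, which the client hasn't finalized):

        # Draft /etc/ipsec.d/client.conf for Openswan; left-side values are placeholders
        cat <<'EOF' | sudo tee /etc/ipsec.d/client.conf
        conn ec2-to-asa
            type=tunnel
            authby=secret
            left=%defaultroute
            leftid=<our-elastic-ip>
            leftsubnet=<our-encrypted-network>
            right=3.3.3.3
            rightsubnet=3.3.3.30/32
            pfs=yes
            auto=start
        EOF
        sudo service ipsec restart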

    Read the article

  • Amazon S3 bucket - download only certain files

    - by mottey
    Hi, I have an Amazon S3 bucket with 10,000 images in it, following a standard naming convention:

        001_small.jpg
        001_large.jpg
        002_small.jpg
        002_large.jpg

    Because there is such a large number of files, I don't want to download ALL of them, and I don't want to sit there for a couple of hours selecting just the *_large.jpg files... Can someone suggest an S3 file manager that lets me select only the *_large.jpg files to download? Thanks!
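
    A command-line answer would be fine too; this is the sketch I'm hoping exists (s3cmd's include/exclude filters, if I've understood its manual; the bucket name is a placeholder):

        # Fetch only the *_large.jpg objects from the bucket
        s3cmd sync --exclude '*' --include '*_large.jpg' s3://my-image-bucket/ ./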

    Read the article

  • Amazon EC2 reserved instance

    - by lydonchandra
    Hi, we received an education credit (valid for 1 year) from Amazon, and we're wondering if we can buy a reserved instance (3 years) using that credit? Is there any way to reserve how much bandwidth we can use, too? Thanks

    Read the article

  • Amazon EC2 AMI recommendations for the free tier?

    - by Console
    Amazon Web Services recently introduced a free tier, where you basically get free stuff to try out AWS and run tiny sites and projects. Basically, it's free as long as you remain below certain limits on bandwidth, disk storage, etc. Since going over the limits can quickly become quite expensive (for a hobbyist), I would like some recommendations or suggestions about which AMIs I can run on the free tier, for the purpose of trying out Ruby on Rails and/or Django.

    Read the article

  • Creating my own Amazon Machine Image: kernel panic

    - by amra
    I have created my own AMI and registered it on Amazon EC2, but during AMI startup I receive the following error:

        Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(8,1)

    The image runs locally without any problems. fstab contains:

        proc       /proc  proc  defaults                    0  0
        /dev/sda1  /      ext3  relatime,errors=remount-ro  0  1

    Thanks for the help.

    Read the article

  • Problem installing phpMyAdmin on Amazon EC2?

    - by yoko
    I googled how to install phpMyAdmin on EC2, and I got this syntax:

        sudo yum install phpmyadmin

    But I keep getting this:

        Loaded plugins: fastestmirror, priorities, security
        Loading mirror speeds from cached hostfile
        amzn-main     | 2.1 kB     00:00
        amzn-updates  | 2.1 kB     00:00
        Setting up Install Process
        No package phpmyadmin available.
        Error: Nothing to do

    I went to my website; it's not installed. Please help.

    EDIT: My server OS is the Amazon Linux AMI, 64-bit. I also tried yum install phpmyadmin --enablerepo=development, but still got this error:

        Loaded plugins: fastestmirror, priorities, security
        Error getting repository data for development, repository not found
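
    The next thing I plan to try (a sketch, on my assumption that phpmyadmin is carried by EPEL and that the Amazon Linux AMI ships an EPEL repo definition that is disabled by default):

        # Enable the EPEL repository shipped with Amazon Linux, then retry
        # (yum-config-manager comes from the yum-utils package)
        sudo yum-config-manager --enable epel
        sudo yum install phpmyadmin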

    Read the article

  • Amazon EC2 + GoDaddy SSL not routing properly over HTTPS

    - by azngunit81
    I have an Amazon EC2 instance with an SSL certificate from GoDaddy just installed. I have successfully managed to install it and get the green HTTPS lock on the main domain, https://www.example.com. However, it doesn't serve any deeper path such as https://www.example.com/something, even though the same path works under http://www.example.com. I am using an .htaccess file for some rewriting:

        Options -MultiViews
        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^ index.php [L]

    The EC2 instance is Ubuntu, if that helps in any way; the checks I've run so far are below.
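
    What I've checked so far on the Ubuntu box (a sketch using the stock Apache 2 helpers; I suspect the 443 vhost may not allow .htaccess overrides, but I haven't confirmed that):

        # Make sure mod_rewrite and mod_ssl are enabled and the SSL site is active
        sudo a2enmod rewrite ssl
        sudo a2ensite default-ssl
        sudo service apache2 restart

        # List the vhosts Apache actually loaded, to confirm one on port 443
        apache2ctl -S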

    Read the article
