Search Results

Search found 3148 results on 126 pages for 'amazon s3'.

  • Weird DNS bug - external server resolves to internal IP

    - by emilecantin
    I have a server that is hosted by my university. I have root access, but no control over network setup, firewall, etc. This server's DNS resolves to an internal IP here on campus (10.x.x.x), and an external IP outside campus. I also have a few servers hosted at Amazon, and they mostly work well. However, one of them started to resolve the university server by its internal IP address. This causes problems, as 10.x.x.x on Amazon EC2 is someone else. I have connected to the Amazon server with SSH agent forwarding a few times in the past, to access a Git repository on the university server. Any idea what could cause this?
    EDIT: Here's my /etc/resolv.conf:
        # Generated by dhcpcd for interface eth0
        search ec2.internal
        nameserver 172.16.0.23
    Here's the output of dig myserver.myuniversity.ca.:
        ; <<>> DiG 9.8.1-P1 <<>> myserver.myuniversity.ca.
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 34470
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
        ;; QUESTION SECTION:
        ;myserver.myuniversity.ca. IN A
        ;; ANSWER SECTION:
        myserver.myuniversity.ca. 537586 IN A 10.43.x.x
        ;; Query time: 2 msec
        ;; SERVER: 172.16.0.23#53(172.16.0.23)
        ;; WHEN: Wed Nov 28 16:07:21 2012
        ;; MSG SIZE rcvd: 60
    Here's the expected output (on another Amazon server):
        ; <<>> DiG 9.8.1-P1 <<>> myserver.myuniversity.ca.
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 8045
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
        ;; QUESTION SECTION:
        ;myserver.myuniversity.ca. IN A
        ;; ANSWER SECTION:
        myserver.myuniversity.ca. 601733 IN A x.x.239.1
        ;; Query time: 1 msec
        ;; SERVER: 172.16.0.23#53(172.16.0.23)
        ;; WHEN: Wed Nov 28 16:09:36 2012
        ;; MSG SIZE rcvd: 60
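    A quick way to narrow this down is to ask several resolvers the same question and compare answers; if only the EC2-provided resolver returns the 10.x.x.x record, you are looking at cached or leaked split-horizon data. A minimal sketch (the hostname and addresses are the placeholders from above):
        # Compare the EC2 resolver's answer against public resolvers
        for ns in 172.16.0.23 8.8.8.8 208.67.222.222; do
            echo "== $ns =="
            dig @"$ns" +short myserver.myuniversity.ca.
        done
        # Stopgap until the bad record expires: pin the public address locally
        # echo "x.x.239.1 myserver.myuniversity.ca" >> /etc/hosts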

  • Can't get basic web servers working on EC2 RedHat

    - by Yarin
    I'm trying to get some basic Python web servers (Flask, Tornado) running on EC2. On the Amazon-flavored Linux AMI (Amazon Linux AMI 2013.03.1) they work without a problem, but the same web servers installed on the RedHat quick-launch AMI (Red Hat Enterprise Linux 6.4) don't work at all: all I get is connection-failure errors when I try to browse to them. Both servers share the same security group, with the relevant ports (5000, 5010) open, so I'm trying to understand why the RedHat instance would not be working.
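    One difference worth ruling out: unlike the Amazon Linux AMI, the RHEL 6 AMI ships with the iptables host firewall enabled, so traffic allowed by the security group can still be dropped on the instance itself. A hedged first check on the RHEL box:
        # Is the host firewall filtering the app ports?
        sudo service iptables status
        sudo iptables -L -n
        # If so, open them and persist the rules
        sudo iptables -I INPUT -p tcp --dport 5000 -j ACCEPT
        sudo iptables -I INPUT -p tcp --dport 5010 -j ACCEPT
        sudo service iptables save
        # Also confirm the servers listen on 0.0.0.0, not 127.0.0.1
        netstat -tlnp | grep -E ':(5000|5010)'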

  • IAM / AWS Access control via Windows Azure Active Directory

    - by Haroon
    I am trying to figure out how to configure IAM in Amazon AWS to use Windows Azure Active Directory. I found http://blogs.aws.amazon.com/security/post/Tx71TWXXJ3UI14/Enabling-Federation-to-AWS-using-Windows-Active-Directory-ADFS-and-SAML-2-0, but it is about configuring ADFS. WAAD supports SAML 2.0 (see http://azure.microsoft.com/en-us/documentation/articles/fundamentals-identity/). Has anyone figured it out yet?

  • Why do I have to run aptitude update twice to install Ruby?

    - by Willie Wheeler
    Summary. I have a fresh EC2 Precise 64-bit instance (ami-82fa58eb). After launching the instance, I want to install ruby1.9.1 (among others). This doesn't work:
        aptitude update && apt-get -o Dpkg::Options::="--force-confnew" --force-yes -fuy dist-upgrade && aptitude install -y ruby1.9.1 ruby1.9.1-dev make
    as Aptitude can't find the Ruby package. But this works:
        aptitude update && aptitude update && apt-get -o Dpkg::Options::="--force-confnew" --force-yes -fuy dist-upgrade && aptitude install -y ruby1.9.1 ruby1.9.1-dev make
    I would like to understand why I need to run aptitude update twice.
    Details. The first and second runs look pretty different. First run:
        Ign http://security.ubuntu.com precise-security InRelease
        Ign http://archive.ubuntu.com precise InRelease
        Get: 1 http://security.ubuntu.com precise-security Release.gpg [198 B]
        Ign http://archive.ubuntu.com precise-updates InRelease
        Get: 2 http://security.ubuntu.com precise-security Release [49.6 kB]
        Hit http://archive.ubuntu.com precise Release.gpg
        Get: 3 http://archive.ubuntu.com precise-updates Release.gpg [198 B]
        Hit http://archive.ubuntu.com precise Release
        Get: 4 http://security.ubuntu.com precise-security/main amd64 Packages [161 kB]
        Get: 5 http://archive.ubuntu.com precise-updates Release [49.6 kB]
        Get: 6 http://security.ubuntu.com precise-security/restricted amd64 Packages [3,969 B]
        Hit http://archive.ubuntu.com precise/main amd64 Packages
        Get: 7 http://security.ubuntu.com precise-security/universe amd64 Packages [43.8 kB]
        Hit http://archive.ubuntu.com precise/restricted amd64 Packages
        Hit http://archive.ubuntu.com precise/universe amd64 Packages
        Get: 8 http://security.ubuntu.com precise-security/multiverse amd64 Packages [2,180 B]
        Hit http://archive.ubuntu.com precise/multiverse amd64 Packages
        Get: 9 http://security.ubuntu.com precise-security/main i386 Packages [165 kB]
        Hit http://archive.ubuntu.com precise/main i386 Packages
        Hit http://archive.ubuntu.com precise/restricted i386 Packages
        Hit http://archive.ubuntu.com precise/universe i386 Packages
        Hit http://archive.ubuntu.com precise/multiverse i386 Packages
        Get: 10 http://security.ubuntu.com precise-security/restricted i386 Packages [3,968 B]
        Hit http://archive.ubuntu.com precise/main TranslationIndex
        Get: 11 http://security.ubuntu.com precise-security/universe i386 Packages [44.0 kB]
        Hit http://archive.ubuntu.com precise/multiverse TranslationIndex
        Get: 12 http://security.ubuntu.com precise-security/multiverse i386 Packages [2,369 B]
        Get: 13 http://security.ubuntu.com precise-security/main TranslationIndex [73 B]
        Hit http://archive.ubuntu.com precise/restricted TranslationIndex
        Get: 14 http://security.ubuntu.com precise-security/multiverse TranslationIndex [71 B]
        Hit http://archive.ubuntu.com precise/universe TranslationIndex
        Get: 15 http://security.ubuntu.com precise-security/restricted TranslationIndex [71 B]
        Get: 16 http://archive.ubuntu.com precise-updates/main amd64 Packages [382 kB]
        Get: 17 http://security.ubuntu.com precise-security/universe TranslationIndex [73 B]
        Get: 18 http://security.ubuntu.com precise-security/main Translation-en [76.5 kB]
        Get: 19 http://security.ubuntu.com precise-security/multiverse Translation-en [995 B]
        Get: 20 http://security.ubuntu.com precise-security/restricted Translation-en [978 B]
        Get: 21 http://security.ubuntu.com precise-security/universe Translation-en [27.2 kB]
        Get: 22 http://archive.ubuntu.com precise-updates/restricted amd64 Packages [6,755 B]
        Get: 23 http://archive.ubuntu.com precise-updates/universe amd64 Packages [129 kB]
        Get: 24 http://archive.ubuntu.com precise-updates/multiverse amd64 Packages [8,677 B]
        Get: 25 http://archive.ubuntu.com precise-updates/main i386 Packages [387 kB]
        Get: 26 http://archive.ubuntu.com precise-updates/restricted i386 Packages [6,732 B]
        Get: 27 http://archive.ubuntu.com precise-updates/universe i386 Packages [130 kB]
        Get: 28 http://archive.ubuntu.com precise-updates/multiverse i386 Packages [9,672 B]
        Get: 29 http://archive.ubuntu.com precise-updates/main TranslationIndex [3,564 B]
        Get: 30 http://archive.ubuntu.com precise-updates/multiverse TranslationIndex [2,605 B]
        Get: 31 http://archive.ubuntu.com precise-updates/restricted TranslationIndex [2,461 B]
        Get: 32 http://archive.ubuntu.com precise-updates/universe TranslationIndex [2,850 B]
        Get: 33 http://archive.ubuntu.com precise/main Translation-en [726 kB]
        Get: 34 http://archive.ubuntu.com precise/multiverse Translation-en [93.4 kB]
        Get: 35 http://archive.ubuntu.com precise/restricted Translation-en [2,395 B]
        Get: 36 http://archive.ubuntu.com precise/universe Translation-en [3,341 kB]
        Get: 37 http://archive.ubuntu.com precise-updates/main Translation-en [188 kB]
        Get: 38 http://archive.ubuntu.com precise-updates/multiverse Translation-en [5,414 B]
        Get: 39 http://archive.ubuntu.com precise-updates/restricted Translation-en [1,484 B]
        Get: 40 http://archive.ubuntu.com precise-updates/universe Translation-en [77.3 kB]
        Ign http://archive.ubuntu.com precise/main Translation-en_US
        Ign http://archive.ubuntu.com precise/multiverse Translation-en_US
        Ign http://archive.ubuntu.com precise/restricted Translation-en_US
        Ign http://archive.ubuntu.com precise/universe Translation-en_US
        Fetched 6,137 kB in 11s (538 kB/s)
        Reading package lists...
    Second run:
        Ign http://us-east-1.ec2.archive.ubuntu.com precise InRelease
        Ign http://us-east-1.ec2.archive.ubuntu.com precise-updates InRelease
        Get: 1 http://us-east-1.ec2.archive.ubuntu.com precise Release.gpg [198 B]
        Get: 2 http://us-east-1.ec2.archive.ubuntu.com precise-updates Release.gpg [198 B]
        Ign http://security.ubuntu.com precise-security InRelease
        Get: 3 http://us-east-1.ec2.archive.ubuntu.com precise Release [49.6 kB]
        Get: 4 http://us-east-1.ec2.archive.ubuntu.com precise-updates Release [49.6 kB]
        Get: 5 http://us-east-1.ec2.archive.ubuntu.com precise/main Sources [934 kB]
        Hit http://security.ubuntu.com precise-security Release.gpg
        Hit http://security.ubuntu.com precise-security Release
        Get: 6 http://us-east-1.ec2.archive.ubuntu.com precise/universe Sources [5,019 kB]
        Get: 7 http://security.ubuntu.com precise-security/main Sources [42.8 kB]
        Get: 8 http://security.ubuntu.com precise-security/universe Sources [13.5 kB]
        Hit http://security.ubuntu.com precise-security/main amd64 Packages
        Hit http://security.ubuntu.com precise-security/universe amd64 Packages
        Hit http://security.ubuntu.com precise-security/main i386 Packages
        Get: 9 http://us-east-1.ec2.archive.ubuntu.com precise/main amd64 Packages [1,273 kB]
        Hit http://security.ubuntu.com precise-security/universe i386 Packages
        Get: 10 http://us-east-1.ec2.archive.ubuntu.com precise/universe amd64 Packages [4,786 kB]
        Hit http://security.ubuntu.com precise-security/main TranslationIndex
        Hit http://security.ubuntu.com precise-security/universe TranslationIndex
        Hit http://security.ubuntu.com precise-security/main Translation-en
        Hit http://security.ubuntu.com precise-security/universe Translation-en
        Get: 11 http://us-east-1.ec2.archive.ubuntu.com precise/main i386 Packages [1,274 kB]
        Get: 12 http://us-east-1.ec2.archive.ubuntu.com precise/universe i386 Packages [4,796 kB]
        Get: 13 http://us-east-1.ec2.archive.ubuntu.com precise/main TranslationIndex [3,706 B]
        Get: 14 http://us-east-1.ec2.archive.ubuntu.com precise/universe TranslationIndex [2,922 B]
        Get: 15 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main Sources [163 kB]
        Get: 16 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe Sources [50.8 kB]
        Get: 17 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main amd64 Packages [382 kB]
        Get: 18 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe amd64 Packages [129 kB]
        Get: 19 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main i386 Packages [387 kB]
        Get: 20 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe i386 Packages [129 kB]
        Get: 21 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main TranslationIndex [3,564 B]
        Get: 22 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe TranslationIndex [2,850 B]
        Get: 23 http://us-east-1.ec2.archive.ubuntu.com precise/main Translation-en [726 kB]
        Get: 24 http://us-east-1.ec2.archive.ubuntu.com precise/universe Translation-en [3,341 kB]
        Get: 25 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main Translation-en [188 kB]
        Get: 26 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe Translation-en [77.1 kB]
        Fetched 23.8 MB in 23s (1,026 kB/s)
        Reading package lists...
    Note. My question is almost exactly the same as "Running 'apt-get upgrade' on Amazon EC2 AMI twice in succession upgrades very different packages", except that I'm seeing this issue with aptitude updates rather than apt-get upgrades.
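    The second run fetching everything from us-east-1.ec2.archive.ubuntu.com suggests cloud-init is still rewriting sources.list to the regional mirror while the first update runs, so the first run indexes a mirror list that is immediately replaced. A hedged workaround is simply to keep refreshing until the package becomes resolvable:
        # Re-run the index refresh until aptitude can actually see the Ruby package
        until aptitude show ruby1.9.1 >/dev/null 2>&1; do
            aptitude update
            sleep 5
        done
        apt-get -o Dpkg::Options::="--force-confnew" --force-yes -fuy dist-upgrade
        aptitude install -y ruby1.9.1 ruby1.9.1-dev make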

  • Second Edition of Regular Expressions Cookbook Has Been Published

    - by Jan Goyvaerts
    The first edition of Regular Expressions Cookbook was published in May of 2009. It quickly became a bestseller, briefly holding the #1 spot in computer books on Amazon.com. It also had staying power: the ebook version was O’Reilly’s top seller during the whole year of 2010. So it’s no surprise that our editor at O’Reilly soon contacted us for a second edition. With Steven and me always being very busy, those plans were delayed until finally both of us found the time to update the book. Work started in January.
    Today you can buy your own copy of the second edition of Regular Expressions Cookbook. O’Reilly’s online shop sells the eBook in DRM-free ePub, Mobi, and PDF formats for $39.99 and the print version for $49.99. These are the list prices for the eBook and the print book. If you’re looking for a discount and free shipping of the print book, you can pre-order on one of the various Amazon sites. Deliveries should start soon. The discount rates differ and are subject to change. Amazon will also pay me an affiliate commission if you use one of these links, which pretty much doubles the income I get from the book.
        Amazon.com. Free shipping to the USA.
        Amazon.co.uk. Free shipping to the UK and Ireland.
        Amazon.fr. Free shipping to France, Monaco, Luxembourg, and Belgium.
        Amazon.de. Free shipping to Germany, Austria, Switzerland, Luxembourg, Liechtenstein, Belgium, and The Netherlands.
    If you don’t want to wait for the print book to arrive, the Kindle edition is already available for instant delivery. The Kindle edition works on Amazon’s Kindle hardware, and on PCs via Amazon’s Kindle software (free download): Amazon.com, Amazon.co.uk, Amazon.fr, Amazon.de.
    I’ll blog more about the book in the coming days and weeks with details about what’s new in the second edition.

  • Ubuntu 12.04 cloud edition on Amazon - Apache2 - /etc

    - by jdog
    I have set up a web server on Amazon with 3 virtual hosts. For some reason I can't get any of the sites going on it; they all show a 404 error, and /var/log/apache2/error.log shows "File does not exist: /etc/apache2/htdocs". I have checked:
        - a2ensite for all my virtual hosts (actually checked the softlinks in sites-enabled)
        - access rights in /var/www set to 777, in case the user is not www-data
        - grep -r htdocs /etc/apache2 (returns nothing)
        - ports.conf has a NameVirtualHost directive exactly matching the Virtual Hosts
    What else could this be?
    ports.conf:
        # If you just change the port or add more ports here, you will likely also
        # have to change the VirtualHost statement in
        # /etc/apache2/sites-enabled/000-default
        # This is also true if you have upgraded from before 2.2.9-3 (i.e. from
        # Debian etch). See /usr/share/doc/apache2.2-common/NEWS.Debian.gz and
        # README.Debian.gz
        NameVirtualHost 107.20.169.163:80
        Listen 80
        <IfModule mod_ssl.c>
            # If you add NameVirtualHost *:443 here, you will also have to change
            # the VirtualHost statement in /etc/apache2/sites-available/default-ssl
            # to <VirtualHost *:443>
            # Server Name Indication for SSL named virtual hosts is currently not
            # supported by MSIE on Windows XP.
            Listen 443
        </IfModule>
        <IfModule mod_gnutls.c>
            Listen 443
        </IfModule>
    sites-available/www.seleconlight.com:
        <VirtualHost 107.20.169.163:80>
            ServerName www.seleconlight.com
            DocumentRoot /var/www/www.seleconlight.com
            CustomLog /var/log/apache2/www.seleconlight.com-access.log combined
            ErrorLog /var/log/apache2/www.seleconlight.com-error.log
        </VirtualHost>
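    Apache falls back to the compiled-in default (/etc/apache2/htdocs) when no configured vhost matches the request, so dumping the parsed vhost table is a good next step. Note also that on EC2 the public IP is NATed and never appears on the instance's own interface, so a NameVirtualHost/VirtualHost pinned to 107.20.169.163 may never match anything; *:80 is the usual choice. A quick check:
        # Show which vhosts Apache actually parsed, and for which address:port
        apache2ctl -S
        # The public IP will not show up here on EC2 (it is NATed)
        ip addr show eth0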

  • SSH attack CentOS Amazon EC2

    - by user37143
    Hi, I run a few RightScale CentOS AMI based instances on Amazon EC2. Two months back I found that our SSHD security was compromised (I had added hosts.allow and hosts.deny for ssh). So I created new instances, set up IP-based ssh access that allows only our IPs through the AWS firewall (ec2-authorize), and changed the default ssh port 22 to some other port. But two days back I found I was not able to log in to the server; when I tried on port 22 the ssh got connected, and I found that sshd_config had been changed. When I tried to edit sshd_config I found root had no write permission on the file. So I tried a chmod and it said access denied for the 'root' user. This is very strange. I checked the secure log and history and found nothing informative. I have PHP, Ruby on Rails, Java, and WordPress apps running on these servers. This time I did a chkrootkit scan and found nothing. I renamed the /etc/ssh folder and reinstalled openssh through yum. I have faced this on 3 instances on CentOS (5.2, 5.4); I have instances on Debian as well, and those are working fine. Is this a CentOS/RightScale issue? What security measures should I take to prevent this? Please help, this is very critical. Thanks
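    Root being unable to chmod its own file is characteristic of the ext2/3/4 immutable attribute, which intruders often set on sshd_config; ordinary permissions do not apply to it. A quick check (on a host that should otherwise be treated as compromised and rebuilt):
        # List extended attributes; an 'i' flag means the file is immutable
        lsattr /etc/ssh/sshd_config /etc/passwd /etc/shadow
        # Clear the flag only after preserving a forensic copy of the system
        chattr -i /etc/ssh/sshd_config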

  • Scaling a video processing application on EC2?

    - by Stpn
    I am approaching the need to scale a video-processing application that runs on EC2. So far the setup is one machine:
        Backbone.js frontend
        Rails 3.2
        PostgreSQL
        Resque + S3 for storage
    The flow of the app is as follows:
        1) Request from frontend. Upload a video.
        2) Storing video
        3) Querying external APIs.
        4) Processing / encoding videos.
        5) Post to frontend.
    I can separate the backend and frontend without any problems, but when it comes to distributing the backend between several servers I am a bit puzzled. I can probably come up with a temporary solution (like just duplicating apps, making several instances), but since I don't really have expertise in backend system administration, there can be some fundamental mistakes. Also I would rather have something that is scalable. I wonder if anyone can give some feedback on the following plan:
        A) Frontend machine. Just frontend, talks to backend via a REST API of sorts.
        B) Backend server (BS), main database. Gets request from 1), posts to 2), saves uploads to 3)
        C) S3 storage.
        D) Server for querying APIs. Basically just Resque workers that post info back to 2)
        E) Server for video encoding. Processes videos uploaded on 3) and uploads them back.
    So I will have:
        A)frontend
             \
              \
        B)MAIN_APP/DB ----- C)S3 Storage (Files)
             /  \                  /
            /    \                /
        D)ExternalAPI_queries   E)Video_Processing
        (redundant DB)          (redundant DB)
    All this will supposedly talk to each other via HTTP requests. My reason for this is that the video-processing part is really the most resource-intensive, and I would just run a barebones application that accepts requests and starts processing them. Questions:
        1) In this setup I will have the main database at B), and all other servers will communicate with it via HTTP requests (and store duplicates of the database too, I guess, for safety reasons). Is that the right approach, or should I have 1 database that everyone connects to (how then?)
        2) Is it a good idea to separate API queries from the video-processing part? Logically they are very close (processing is determined by the result of API queries), but resource-wise video processing is waaay more intensive.
        3) What should I use to distribute calls between backend apps based on load?

  • No GPS updates on Galaxy S3

    - by Valelik
    I'm developing a GPS tracker and it works like a charm. But a couple of weeks ago a customer of mine (a vehicle-tracking company) bought Samsung Galaxy S3 phones for his drivers, and since then we have seen really strange behaviour in my app. The app receives location updates from the GPS receiver, but after some hours of work it stops receiving any location updates. I have registered the app for onGpsStatusChanged() too, and during this time onGpsStatusChanged() was called (I can see that the GPS receiver has 10-17 satellites!), but the method onLocationChanged() was not called! After a service restart (= re-registering of the LocationListener) it works again. It is really strange. It seems that after some hours of work the GPS receiver is not in the mood for calling onLocationChanged() :) Any idea what may be wrong?

  • API to lookup product information by UPC?

    - by officespace672
    Is there an API that allows lookup of product information by UPC? I know that Amazon has the Product Advertising API, but I don't think it can be used for any purpose other than sending traffic to amazon.com, as per their license agreement here. Specifically, my application would not have "the principal purpose of advertising and marketing the Amazon Site and driving sales of products and services on the Amazon Site". Does such an API exist that I can do anything I want with the data?
    UPDATE: I would want to use the API for my application, not create such an API.

  • Does the iPhone support background processes or services?

    - by Fedrick
    Hi all, I am planning to develop an iPhone client application to upload images from the iPhone gallery to Amazon S3 using REST calls. Is there any library to run this application as a background process on the iPhone? Also, is there any library to access the iPhone photo gallery (it should be able to access all the images, not only a selected one as in UIImagePickerController)? Thanks in advance to the Stack Overflow masters for sharing their knowledge.

  • Is there any sync services library for the iPhone?

    - by Fedrick
    Hi all, I am developing a mobile client to sync images from the iPhone photo gallery to Amazon S3. Are there any sync services libraries that can help me in this regard? Also, is there any library to access the iPhone photo gallery? I just want to pick all photos, randomly, from the images stored on the device with no user interaction. Thanks in advance.

  • Cache front end for the JetS3t API

    - by Joshua
    Storage via the JetS3t REST API seems to be very slow. Is there a caching front end for the JetS3t API for avoiding a network hit on the fetch calls? See S3Service.getObject: http://jets3t.s3.amazonaws.com/api/org/jets3t/service/S3Service.html#getObject(org.jets3t.service.model.S3Bucket, java.lang.String, java.util.Calendar, java.util.Calendar, java.lang.String[], java.lang.String[], java.lang.Long, java.lang.Long)

  • Conditional action based on whether any file in a directory has a ctime newer than X

    - by jberryman
    I would like to run a backup job on a directory tree from a bash script if any of the files have been modified in the last 30 minutes. I think I can hack together something using find with the -ctime flag, but I'm sure there is a standard way to examine a directory for changes. I know that I can inspect the ctime of the top-level directory to see if files were added, but I need to be able to see changes also. FWIW, I am using duplicity to back up directories to S3.
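    One thing to watch: find's -ctime counts in 24-hour units, so for a 30-minute window the minute-granularity flags (-cmin, or -mmin for content modifications) are the better fit. A minimal sketch, assuming the tree lives at /data and the bucket name is a placeholder:
        #!/bin/bash
        # Back up only if something under /data changed in the last 30 minutes;
        # -print -quit stops at the first match, so large trees exit early
        if [ -n "$(find /data -mmin -30 -print -quit)" ]; then
            duplicity /data s3+http://my-backup-bucket/data
        fi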

  • yum not working on EC2 Red Hat instance: Cannot retrieve repository metadata

    - by adev3
    For some reason yum has stopped working on my Amazon EC2 instance, located in the EU West sector. There seems to be something wrong with the path of the repo metadata; is this correct? I would be very grateful for any help, as my experience in this field is somewhat limited. Thank you very much.
    cat /etc/redhat-release:
        Red Hat Enterprise Linux Server release 6.2 (Santiago)
    yum repolist:
        Loaded plugins: amazon-id, rhui-lb, security
        https://rhui2-cds01.eu-west-1.aws.ce.redhat.com/pulp/repos//rhui-client-config/rhel/server/6/x86_64/os/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 401"
        Trying other mirror.
        https://rhui2-cds02.eu-west-1.aws.ce.redhat.com/pulp/repos//rhui-client-config/rhel/server/6/x86_64/os/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 401"
        Trying other mirror.
        repo id                                        repo name                                                        status
        rhui-eu-west-1-client-config-server-6          Red Hat Update Infrastructure 2.0 Client Configuration Server 6      0
        rhui-eu-west-1-rhel-server-releases            Red Hat Enterprise Linux Server 6 (RPMs)                             0
        rhui-eu-west-1-rhel-server-releases-optional   Red Hat Enterprise Linux Server 6 Optional (RPMs)                    0
        repolist: 0
    yum update (I needed to remove the base URLs below because of ServerFault's restrictions for new users):
        Loaded plugins: amazon-id, rhui-lb, security
        [same as base url 1 above]/pulp/repos//rhui-client-config/rhel/server/6/x86_64/os/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 401"
        Trying other mirror.
        [same as base url 2 above]/pulp/repos//rhui-client-config/rhel/server/6/x86_64/os/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 401"
        Trying other mirror.
        Error: Cannot retrieve repository metadata (repomd.xml) for repository: rhui-eu-west-1-client-config-server-6. Please verify its path and try again
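    A 401 from the RHUI content servers usually means the instance's client entitlement certificate was rejected rather than the path being wrong; clock skew and stale metadata are the two cheap things to rule out first (a sketch):
        # TLS and entitlement checks can fail if the clock is far off
        date
        sudo ntpdate pool.ntp.org
        # Discard possibly stale repo metadata and retry
        sudo yum clean all
        sudo yum repolist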

  • Can I replicate data between mySQL and SQL Server/SQL Azure?

    - by Ernest Mueller
    I have a replicated MySQL setup running happily on Amazon AWS, making user data available locally in various regions. Now I'm faced with an app that needs to go up on Microsoft Azure, and I need to replicate the data over to there as well. So that's annoying. I am faced with several options:
        - Replicate from MySQL to SQL Azure/SQL Server, which seems like it would be lovely - is this possible? I'd consider using a third-party tool and paying $$ if I had to. We're not using anything complicated in the db feature set; it's just data in tables.
        - Get MySQL working on Microsoft Azure, which seems really dicey at best. All the HOWTOs I can find say "this is possible but you really shouldn't try this for production apps."
        - Go non-realtime and do syncs from MySQL to SQL Azure, which may be somewhat expensive and slower.
        - Rip out all my MySQL on Amazon and use SQL Server there, which would make Baby Jesus cry.
    Has anyone gotten MySQL to SQL Azure/SQL Server replication or syncing working? Or have any other approaches (a NoSQL solution that replicates and might meet our but-we-need-to-join-some-tables needs that can easily be run on Amazon and Azure)?

  • How can I debug a port/connectivity issue?

    - by rfw21
    I am running a simple WebSocket server on Amazon EC2 (Fedora Core). I've opened the relevant port using ec2-authorize, and checked that it's opened. Iptables is definitely not running. However I can't connect to the port from outside EC2. I've tried the following (my server is running on port 7000):
        telnet ec2-public-dns.xx.xx.xx.amazon.com 7000 (from within EC2: connects fine)
        nmap localhost (output includes line: 7000/tcp open afs3-fileserver)
        telnet ec2-public-dns.xx.xx.xx.amazon.com 7000 (this time from my local machine: I get "connection refused: Unable to connect to remote host")
    The strange thing is this: if I start Nginx on port 7000 then it works and I can connect from outside EC2! And the WebSocket server fails on port 80, where Nginx works fine. To me this suggests a problem with the WebSocket server, BUT I can connect to it successfully from within EC2. (And it works fine on a different VPS account.) How can I debug this further? If anybody can stop me tearing my hair out, I'd be very grateful indeed :)
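    The "nginx works on 7000 but my server doesn't" pattern usually means the WebSocket server binds 127.0.0.1 while nginx binds 0.0.0.0; nmap against localhost can't distinguish the two. The listening address is quick to check (a sketch):
        # Inspect the Local Address column:
        # 127.0.0.1:7000 is reachable only locally; 0.0.0.0:7000 accepts outside traffic
        netstat -tlnp | grep ':7000'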

  • How do I Install Intermediate Certificates (in AWS)?

    - by getmizanur
    I have installed a private key (PEM encoded) and a public key certificate (PEM encoded) on an Amazon Load Balancer. However, when I check the SSL with a site test tool, I get the following error:
        Error while checking the SSL Certificate!! Unable to get the local issuer of the certificate. The issuer of a locally looked up certificate could not be found. Normally this indicates that not all intermediate certificates are installed on the server.
    I converted the crt file to pem using these commands from this tutorial:
        openssl x509 -in input.crt -out input.der -outform DER
        openssl x509 -in input.der -inform DER -out output.pem -outform PEM
    During setup of the Amazon Load Balancer, the only option I left out was the certificate chain (PEM encoded); however, this was optional. Could this be the cause of my issue? And if so, how do I create the certificate chain?
    UPDATE: If you make a request to VeriSign, they will give you a certificate chain. This chain includes the public crt, the intermediate crt, and the root crt. Make sure to remove the public crt from your certificate chain (it is the topmost certificate) before adding the chain to the certificate chain box of your Amazon Load Balancer. If you are making HTTPS requests from an Android app, the above instruction may not work for older Android OS versions such as 2.1 and 2.2. To make it work on older Android OS: go here, click on the "retail ssl" tab, then on "secure site" and "CA Bundle for Apache Server", and copy and paste those intermediate certs into the certificate chain box; just in case you have not found it, here is the direct link. If you are using GeoTrust certificates, the solution is much the same for Android devices; however, you need to copy and paste their intermediate certs for Android.
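    Whether the load balancer is actually serving the intermediates can be verified from any machine with openssl; substitute your ELB's DNS name (a sketch):
        # Print every certificate presented during the handshake;
        # a complete chain shows the intermediate(s) between leaf and root
        openssl s_client -connect my-elb.example.com:443 -showcerts </dev/null
        # Chain file for the ELB's certificate-chain box: intermediates first,
        # root last, and the leaf certificate itself left out
        cat intermediate.pem root.pem > chain.pem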

  • Mysterious swap usage on EC2

    - by rusty
    We're in the middle of a project to move our infrastructure from a co-lo situation into Amazon EC2, and we've noticed some weird memory characteristics of the processes in our setup. Without going into too much detail about the specifics of our processes, we've noticed that on our EC2 instances "top" will show processes using a lot of swap space -- in fact, much greater than the amount of available swap or (if you add it all up) more than the available disk. Here's a sample top output:
        Mem:   7136868k total,  5272300k used,  1864568k free,   256876k buffers
        Swap:  1048572k total,        0k used,  1048572k free,  2526504k cached

          PID USER   PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+ SWAP COMMAND
         4121 jboss  20   0 5913m 603m  14m S  0.7  8.7  3:59.90 5.2g java
        22730 root   20   0 2394m 4012 1976 S  2.0  0.1  4:20.57 2.3g PassengerHelper
        20564 rails  20   0 2539m 220m 9828 S  0.3  3.2  0:23.58 2.3g java
         1423 nscd   20   0  877m 1464  972 S  0.0  0.0  0:03.89 876m nscd
    You can see, for instance, that jboss is reportedly using 5.2 gigs of swap space, which is definitely impossible since there's only 1G allocated and none is being used (probably because there's still 1.8G of RAM free). And here are the results of uname -a:
        Linux xxx.yyy.zzz 2.6.35.14-106.53.amzn1.x86_64 #1 SMP Fri Jan 6 16:20:10 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
    We're running an AMI based off of the default Amazon Linux AMI (Amazon Linux AMI release 2011.09, so some RHEL5 and RHEL6) with not too many customizations and definitely no kernel-level customizations. Something here tells me that on this particular kernel/distribution, the reporting of swap or maybe even total memory usage isn't what it appears to be... Any help would be appreciated!
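    Worth knowing: top's SWAP column isn't measured at all; it is simply VIRT minus RES, so a JVM that reserves a large address space it never touches appears to "use" gigabytes of swap that don't exist. The kernel's own per-process figure lives in /proc (a sketch, using the jboss PID from the output above):
        # VmSwap is the kernel-accounted swap usage (needs kernel 2.6.34+)
        grep VmSwap /proc/4121/status
        # Or scan every process for nonzero swap
        grep -H 'VmSwap' /proc/[0-9]*/status | grep -v ' 0 kB'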

  • Amazon EC2 Instance - m1.medium Ubuntu 12.04 - Started to crash three days ago

    - by Joy
    The environment: Amazon EC2 instance, m1.medium, Ubuntu 12.04, Apache 2.2.22, running a Drupal site using a MySQL DB server.
    RAM info:
        ~$ free -gt
                     total       used       free     shared    buffers     cached
        Mem:             3          1          2          0          0          0
        -/+ buffers/cache:          0          2
        Swap:            0          0          0
        Total:           3          1          2
    Hard drive info:
        Filesystem      Size  Used Avail Use% Mounted on
        /dev/xvda1      7.9G  4.7G  2.9G  62% /
        udev            1.9G  8.0K  1.9G   1% /dev
        tmpfs           751M  180K  750M   1% /run
        none            5.0M     0  5.0M   0% /run/lock
        none            1.9G     0  1.9G   0% /run/shm
        /dev/xvdb       394G  199M  374G   1% /mnt
    The problem: About two days ago the site started failing because the MySQL server was shut down, with the following message:
        kernel: [2963685.664359] [31716] 106 31716 226946 22748 0 0 0 mysqld
        kernel: [2963685.664730] Out of memory: Kill process 31716 (mysqld) score 23 or sacrifice child
        kernel: [2963685.664764] Killed process 31716 (mysqld) total-vm:907784kB, anon-rss:90992kB, file-rss:0kB
        kernel: [2963686.153608] init: mysql main process (31716) killed by KILL signal
        kernel: [2963686.169294] init: mysql main process ended, respawning
    That states that the process was occupying 0.9GB, but my RAM has 2GB free, so 1GB should still be left free. I understand that in Linux applications can allocate more memory than is physically available. I don't know if this is the problem; it's the first time that it has started to happen. Obviously, the MySQL server tries to restart, but apparently there's no memory for it and it won't restart. Here is its error log:
        Plugin 'FEDERATED' is disabled.
        The InnoDB memory heap is disabled
        Mutexes and rw_locks use GCC atomic builtins
        Compressed tables use zlib 1.2.3.4
        Initializing buffer pool, size = 128.0M
        InnoDB: mmap(137363456 bytes) failed; errno 12
        Completed initialization of buffer pool
        Fatal error: cannot allocate memory for the buffer pool
        Plugin 'InnoDB' init function returned error.
        Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        Unknown/unsupported storage engine: InnoDB
        [ERROR] Aborting
        [Note] /usr/sbin/mysqld: Shutdown complete
    I simply restarted the MySQL service. About two hours later it happened again, and I restarted it. Then it happened again 9 hours later. So then I thought of the MaxClients parameter of apache.conf, and went to check it out. It was set at 150, and I decided to drop it down to 60, like so:
        <IfModule mpm_prefork_module>
            ...
            MaxClients 60
        </IfModule>
        <IfModule mpm_worker_module>
            ...
            MaxClients 60
        </IfModule>
        <IfModule mpm_event_module>
            ...
            MaxClients 60
        </IfModule>
    Once I did that, I had the apache2 service restart and it all went smoothly for 3/4 of a day. But during the night the MySQL service shut down once again; this time it wasn't killed by the Apache2 service. Instead it invoked the OOM killer, with the following message:
        kernel: [3104680.005312] mysqld invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0, oom_score_adj=0
        kernel: [3104680.005351] [<ffffffff81119795>] oom_kill_process+0x85/0xb0
        kernel: [3104680.548860] init: mysql main process (30821) killed by KILL signal
    Now I'm out of ideas. Some articles state that the ideal thing to do is change the kernel behaviour with the following (included in the file /etc/sysctl.conf):
        vm.overcommit_memory = 2
        vm.overcommit_ratio = 80
    so no overcommits will take place. I'm wondering if this is the way to go? Keep in mind I'm no server administrator; I have only basic knowledge. Thanks a bunch in advance.
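    Since the instance has no swap at all ("Swap: 0 0 0" above), one hedged mitigation alongside (or instead of) the overcommit sysctls is to add a swap file and cap InnoDB's buffer pool; the sizes here are illustrative, not a recommendation:
        # 1 GB swap file to give the OOM killer some headroom
        sudo dd if=/dev/zero of=/swapfile bs=1M count=1024
        sudo chmod 600 /swapfile
        sudo mkswap /swapfile
        sudo swapon /swapfile
        echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
        # And/or shrink MySQL's footprint in /etc/mysql/my.cnf (illustrative value):
        # [mysqld]
        # innodb_buffer_pool_size = 64M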

  • Django Upload form to S3 img and form validation

    - by citadelgrad
    I'm fairly new to both Django and Python. This is my first time using forms and uploading files with Django. I can get the uploads and saves to the database to work fine, but it fails to validate the email or check whether the user selected a file to upload. I've spent a lot of time reading documentation trying to figure this out. Thanks!
    views.py:
        def submit_photo(request):
            if request.method == 'POST':
                def store_in_s3(filename, content):
                    conn = S3Connection(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
                    bucket = conn.create_bucket(AWS_STORAGE_BUCKET_NAME)
                    mime = mimetypes.guess_type(filename)[0]
                    k = Key(bucket)
                    k.key = filename
                    k.set_metadata("Content-Type", mime)
                    k.set_contents_from_file(content)
                    k.set_acl('public-read')
                if imghdr.what(request.FILES['image_url']):
                    qw = request.FILES['image_url']
                    filename = qw.name
                    image = filename
                    content = qw.file
                    url = "http://bpd-public.s3.amazonaws.com/" + image
                    data = {image_url : url,
                            user_email : request.POST['user_email'],
                            user_twittername : request.POST['user_twittername'],
                            user_website : request.POST['user_website'],
                            user_desc : request.POST['user_desc']}
                    s = BeerPhotos(data)
                    if s.is_valid():
                        #import pdb; pdb.set_trace()
                        s.save()
                        store_in_s3(filename, content)
                        return HttpResponseRedirect(reverse('photos.views.thanks'))
                    return s.errors
                else:
                    return errors
            else:
                form = BeerPhotoForm()
                return render_to_response('photos/submit_photos.html', locals(), context_instance=RequestContext(request))
    forms.py:
        class BeerPhotoForm(forms.Form):
            image_url = forms.ImageField(widget=forms.FileInput, required=True, label='Beer', help_text='Select a image of no more than 2MB.')
            user_email = forms.EmailField(required=True, help_text='Please type a valid e-mail address.')
            user_twittername = forms.CharField()
            user_website = forms.URLField(max_length=128,)
            user_desc = forms.CharField(required=True, widget=forms.Textarea, label='Description',)
    template.html:
        <div id="stylized" class="myform">
            <form action="." method="post" enctype="multipart/form-data" width="450px">
                <h1>Photo Submission</h1>
                {% for field in form %}
                    {{ field.errors }}
                    {{ field.label_tag }}
                    {{ field }}
                {% endfor %}
                <label><span>Click here</span></label>
                <input type="submit" class="greenbutton" value="Submit your Photo" />
            </form>
        </div>
  • Does cloud storage replicate data over many datacenters, and if so, do I benefit from content delivery?

    - by Berkay
    Let's assume that I want to use a cloud storage service from one of the cloud storage providers. I have X GB of structured and unstructured data, and I will use this data as the contents of my interactive web page. Now I have some doubts about this point. I have many users, and they are all visiting my web page from various countries. To be more specific: first, is my data stored in only one of the cloud storage provider's data centers, or is it replicated over many of them? Second, if so, how can I benefit from a content delivery network (matching and placing users' content at the nearest storage data centers)?

  • Is there the equivalent of cloud computing for modems?

    - by morpheous
    I asked this question on SF, and someone recommended that I ask it here (I don't think I have enough points to move a question from SF to SO, and in any case I don't know how to do it, so here is the question again): I am interested in the concept of PaaS (platform as a service). However, all talk about SaaS/PaaS seems to focus only on the computer itself, not its peripherals. Is it possible to 'outsource' modems as a resource, so that an app running remotely can pump data to a modem in the cloud? As a bit of background to the question, a group of us are thinking of starting a company that offers similar services to companies like Twilio etc., but I want to 'outsource' both the computing hardware (that's PaaS, the easy bit) and the modems (that's what I can't seem to find any info on). Does anyone know if modems can be bundled as part of a PaaS service? Alternatively, is there a way for an application running on one computer to communicate (i.e. pump data) with a remote modem residing on another machine? I assume I can come up with some protocol over UDP or TCP, but there is no point reinventing the wheel if such a protocol already exists (or if some open source software allows one to do this). Any suggestions on how to solve this problem?
