Search Results

Search found 2978 results on 120 pages for 'amazon aws'.

Page 45/120

  • Very high Network Out on an EC2 instance

    - by Jatin
    I launched an ubuntu-14.04-64bit instance on Amazon EC2 two days ago, started Tomcat 7.0.54 on it, and deployed my application WAR files. It has no software installed other than Tomcat and the defaults. In the past two days it shows 858 GB of data transfer (Network Out) from that instance; I have attached a graph of the Amazon CloudWatch "Network Out" metric. My application does not do any data download/upload. It is a Java Spring application with an HTML & JavaScript front end, and its traffic was very low (fewer than 20 hits) in those two days. Is there a way to find out why these transfers happened, and what data was transferred? As you can see in the graph, Network Out was 20 GB per minute. Some more info: Network In was negligible, CPU utilization was very high, everything else was low.
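
    Very high Network Out combined with very high CPU on a box that serves almost no traffic is often a sign the instance has been compromised (for example, abused to send DDoS traffic or spam). One way to see where the traffic is going is to inspect live connections from the instance; a hedged sketch, assuming the primary interface is eth0:

        # which remote hosts is the instance talking to right now?
        sudo netstat -tunap | grep ESTABLISHED
        # sample outbound packets, ignoring your own SSH session
        sudo tcpdump -i eth0 -nn -c 200 'not port 22'
        # which processes own open network sockets?
        sudo lsof -i -P -n | grep -v LISTEN

    If an unknown process or destination shows up, treat the instance as compromised: snapshot it for analysis and rebuild from a clean image.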

    Read the article

  • How do I install an intermediate certificate?

    - by getmizanur
    I have installed a private key (PEM encoded) and a public key certificate (PEM encoded) on an Amazon load balancer. However, when I check the SSL with a site test tool (http://www.networking4all.com/en/support/tools/site+check/), I get the following error:

        Error while checking the SSL Certificate!! Unable to get the local issuer of the certificate. The issuer of a locally looked up certificate could not be found. Normally this indicates that not all intermediate certificates are installed on the server.

    I converted the .crt file to PEM using these commands from a tutorial:

        openssl x509 -in input.crt -out input.der -outform DER
        openssl x509 -in input.der -inform DER -out output.pem -outform PEM

    While setting up the Amazon load balancer, the only option I left out was the certificate chain (PEM encoded), but it was marked optional. Could this be the cause of my issue? And if so, how do I create the certificate chain? On that last question I have tried googling, but I'm only getting more confused. Many thanks in advance.

    UPDATE: Thanks all for the helpful advice. If you make a request to VeriSign, they will give you a certificate chain; this chain includes the public cert, the intermediate cert, and the root cert. Make sure to remove the public cert (the topmost certificate) from the chain before pasting it into the certificate chain box of your Amazon load balancer. If you are making HTTPS requests from an Android app, the above may not work on older Android versions such as 2.1 and 2.2. For those, see https://knowledge.verisign.com/support/ssl-certificates-support/index?page=content&id=AR657&actp=LIST&viewlocale=en_US (click the "retail ssl" tab, then "secure site", then "CA Bundle for Apache Server") and copy and paste those intermediate certs into the certificate chain box. The direct link is https://knowledge.verisign.com/support/ssl-certificates-support/index?page=content&id=AR1409. If you are using GeoTrust certificates, the solution is much the same, except you need their intermediate certs for Android. (PS: sorry for the long URLs, but "new users can only post a maximum of two hyperlinks".)
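
    For the asker's final question: a certificate chain PEM is just the CA's intermediate certificate(s) followed by the root certificate, concatenated in order; the site's own (leaf) certificate is not included. A minimal sketch, assuming the CA's certs have been downloaded as intermediate.crt and root.crt (hypothetical file names):

        # order matters: your certificate's issuer first, root last
        cat intermediate.crt root.crt > chain.pem
        # sanity-check your certificate against the assembled chain
        openssl verify -CAfile chain.pem output.pem

    The contents of chain.pem are what belongs in the load balancer's certificate chain box.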

    Read the article

  • Backup software for Ubuntu - which one?

    - by Industrial
    Hi everybody, I have spent some time over the last few weeks testing different backup solutions for my small home office, but I still haven't found anything that works well. We can definitely work with a non-GUI script if that's what it takes, as long as the requirements are fulfilled:

    Upload to Amazon S3 Europe. We get unbelievably slow upload speeds to the US, so uploading 400+ GB of data will not be happening anytime this year.
    Incremental backups: only changed files should be uploaded, or we will have a big bill from Amazon at the end of each month.
    Files should not be uploaded as one big per-folder archive. That is not efficient at all: if we change one file in a subfolder, a huge two-digit-GB file would have to be re-uploaded during the next backup. Bad for economy again, and for traffic overhead on our internet connection.

    What options are available to us? Thanks!
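
    One tool that covers all three requirements is duplicity, which does incremental, file-level backups straight to S3 and has explicit support for EU buckets. A hedged sketch, assuming a bucket named my-office-backup (hypothetical) and AWS keys in the environment:

        export AWS_ACCESS_KEY_ID=...        # your access key
        export AWS_SECRET_ACCESS_KEY=...    # your secret key
        # full backup on the first run, incremental on every run after that
        duplicity --s3-use-new-style --s3-european-buckets \
            /home/office s3+http://my-office-backup/office

    Run from cron, it uploads only changed data, split into modest volume-sized chunks rather than one giant per-folder archive.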

    Read the article

  • Which EC2 image to start with?

    - by HD.
    I'm starting to test EC2 for a couple of new projects. I need to choose an AMI (Amazon Machine Image), and Amazon offered me Fedora Core 8 as the first option, which is a very old version of one of my favorite distributions. There are a lot of choices, but it's not clear to me which one is the better option. I have my own reasons for choosing a distro and a version when I need to install a new server, but I don't know if I can apply the same ones to EC2. I know there is a beta for RHEL; how stable is that beta? How can I choose between all the CentOS AMIs in the list? So this is my question: do you recommend an AMI to start with on EC2? Thanks

    Read the article

  • Passive mode FTP file download hangs from specific machine

    - by chiptuned
    I have a server, an AWS instance, that just cannot download files from one specific FTP server. I can connect to the FTP server fine and run some commands, but when I request a file, the transfer just hangs. Here is the debug output of the base Linux ftp client after login:

        ---> SYST
        215 UNIX Type: Apache FtpServer
        Remote system type is UNIX.
        ftp> get outgoing/catalog.gz catalog.gz
        local: catalog.gz remote: outgoing/catalog.gz
        ---> PASV
        227 Entering Passive Mode (64,156,167,125,135,191)
        ---> RETR outgoing/catalog.gz
        150 File status okay; about to open data connection.

    That's it. Then it just sits there and nothing transfers. I have verified that a data connection is made, but the client gets no data:

        ? ss -nt dst 64.156.167.125
        State   Recv-Q   Send-Q   Local Address:Port      Peer Address:Port
        ESTAB   0        0        10.185.147.150:41190    64.156.167.125:21
        ESTAB   0        0        10.185.147.150:48871    64.156.167.125:48557

    The FTP server is not under my control, and downloads from other FTP servers in passive mode have worked. Active mode does not work, as the system is behind a firewall. Every FTP client I've tried has the same problem. The download works from other systems, even from other AWS instances with the same security group (though not necessarily the same distro or configuration). I understand it may be an issue on the server side, but I want to know what it is about my particular machine that makes the transfer hang, when it works on every other machine I can get my hands on. Please let me know what the culprit on the client side could be, or any ideas on what else to look at.
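
    When the control connection works but every data transfer stalls on one particular machine, a classic culprit is a path-MTU blackhole: the small control packets get through, while full-size data packets are silently dropped somewhere along the route. A hedged way to test this from the affected instance:

        # do full-size (1500-byte) frames survive the path to the FTP server?
        ping -M do -s 1472 64.156.167.125
        # if large probes are dropped but small ones work, try a lower MTU
        sudo ip link set dev eth0 mtu 1400

    If lowering the MTU lets the download complete, the permanent fix belongs in the instance's network configuration rather than in the FTP client.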

    Read the article

  • Sync local files to S3 similar to Robocopy

    - by Yuck
    I am looking for a way to synchronize an entire local folder structure to Amazon S3, similar to how one might synchronize two folders using Robocopy. Whatever solution I come up with needs to be scheduled to run periodically from the Windows Task Scheduler. So anything that requires a GUI to perform the synchronization is not a viable solution. Standalone Windows .EXE command line utility for Amazon S3 & EC2 looked promising, but seems to have been abandoned and would not work when I tried to use it. Possibly a difference in the way that security is handled now compared to that software's most recent release.
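
    One maintained, GUI-free option is s3cmd, whose sync subcommand behaves much like Robocopy's mirror mode. A hedged sketch, assuming s3cmd and Python are installed on the Windows box and a bucket named my-sync-bucket (hypothetical):

        rem mirror the local folder to S3, deleting remote files that vanished locally
        s3cmd sync --delete-removed "C:\Data\" s3://my-sync-bucket/data/
        rem register a nightly run with the Windows Task Scheduler
        schtasks /Create /SC DAILY /ST 02:00 /TN S3Sync /TR "s3cmd sync --delete-removed C:\Data\ s3://my-sync-bucket/data/"

    Because everything runs from the command line, it satisfies the no-GUI, Task Scheduler requirement directly.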

    Read the article

  • Online file storage similar to Amazon S3

    - by Joel G
    I am looking to code a file storage application in Perl, similar to Amazon S3. I already have an Amazon S3 clone that I found online called ParkPlace, but it's in Ruby, it's old, and it isn't built for high loads. I am not really sure which modules and programs I should use, so I'd like some help picking them out. My requirements are listed below (yes, I know there are lots, but I could start simple and then add more once I get it going):

    Easy API implementation for client-side apps (maybe RESTful, but with extras like mkdir and cp?)
    A centralized database server for the user DB (maybe PostgreSQL?)
    Logging of all connections, bandwidth used, and pretty much everything else, to a centralized server (maybe PostgreSQL again?)
    Easy server-side configuration (config files stored on the servers)
    A web-based control panel for admins and users to view logs (could work just by running queries against the databases)
    Fast
    High uptime
    Low memory usage
    Some sort of load distribution/load balancer (maybe DNS-based, or Pound, or Perlbal, or something else?)
    Maybe a cache of some sort (memcached, or Perlbal, or something else?)

    Thanks in advance

    Read the article

  • How does Amazon's Statistically Improbable Phrases work?

    - by ??iu
    How does something like Statistically Improbable Phrases work? According to Amazon:

    Amazon.com's Statistically Improbable Phrases, or "SIPs", are the most distinctive phrases in the text of books in the Search Inside!™ program. To identify SIPs, our computers scan the text of all books in the Search Inside! program. If they find a phrase that occurs a large number of times in a particular book relative to all Search Inside! books, that phrase is a SIP in that book. SIPs are not necessarily improbable within a particular book, but they are improbable relative to all books in Search Inside!. For example, most SIPs for a book on taxes are tax related. But because we display SIPs in order of their improbability score, the first SIPs will be on tax topics that this book mentions more often than other tax books. For works of fiction, SIPs tend to be distinctive word combinations that often hint at important plot elements.

    For instance, for Joel's first book, the SIPs are: leaky abstractions, antialiased text, own dog food, bug count, daily builds, bug database, software schedules. One interesting complication is that these are phrases of either two or three words. This makes things a little more interesting, because these phrases can overlap with or contain each other.
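
    The scoring that Amazon's description implies is essentially a likelihood ratio. A hedged worked example with made-up numbers: if "leaky abstractions" occurs 40 times per million words in one book but only 0.5 times per million words across the whole Search Inside! corpus, its ratio is 40 / 0.5 = 80, while a common phrase like "the next day" might occur 100 times per million in the book and 90 per million in the corpus, scoring only about 1.1. Ranking phrases by that ratio (typically with a minimum-count cutoff so one-off phrases don't dominate, and with longer phrases suppressing the shorter phrases they contain) produces exactly the behavior described: generic tax phrases rank below the tax topics this particular book leans on most.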

    Read the article

  • Amazon API ItemSearch returns (400) Bad Request

    - by BuzzBubba
    I'm using a simple example from the Amazon documentation for ItemSearch, and I get a strange error: "The remote server returned an unexpected response: (400) Bad Request." This is the code:

        public static void Main()
        {
            // Remember to create an instance of the Amazon service, including your Access ID.
            AWSECommerceServicePortTypeClient service = new AWSECommerceServicePortTypeClient(new BasicHttpBinding(),
                new EndpointAddress("http://webservices.amazon.com/onca/soap?Service=AWSECommerceService"));
            AWSECommerceServicePortTypeClient client = new AWSECommerceServicePortTypeClient(new BasicHttpBinding(),
                new EndpointAddress("http://webservices.amazon.com/onca/soap?Service=AWSECommerceService"));

            // prepare an ItemSearch request
            ItemSearchRequest request = new ItemSearchRequest();
            request.SearchIndex = "Books";
            request.Title = "Harry+Potter";
            request.ResponseGroup = new string[] { "Small" };

            ItemSearch itemSearch = new ItemSearch();
            itemSearch.Request = new ItemSearchRequest[] { request };
            itemSearch.AWSAccessKeyId = accessKeyId;

            // issue the ItemSearch request
            try
            {
                ItemSearchResponse response = client.ItemSearch(itemSearch);
                // write out the results
                foreach (var item in response.Items[0].Item)
                {
                    Console.WriteLine(item.ItemAttributes.Title);
                }
            }
            catch (Exception e)
            {
                Console.ForegroundColor = ConsoleColor.Red;
                Console.WriteLine(e.Message);
                Console.ForegroundColor = ConsoleColor.White;
                Console.WriteLine("Press any key to quit...");
                Clipboard.SetText(e.Message);
            }
            Console.ReadKey();
        }

    What is wrong?

    Read the article

  • Scalability 101: How can I design a scalable web application using PHP?

    - by Legend
    I am building a web application and have a couple of quick questions. From what I've learned, one should not worry about scalability when initially building the app, and should only start worrying when the traffic increases. However, this being my first web application, I am not quite sure whether I should design things in an ad hoc manner and later "fix" them. I have been reading stories about how people start off with an app that gets millions of users in a week or two. Not that I will face the same situation, but I can't help wondering: how do these people do it?

    Currently I have a shared hosting account on Lunarpages, which got me started building and testing the application. However, I am interested in learning how to build the same application in a scalable manner using the cloud, for instance Amazon's EC2. From my understanding, I can see a couple of components:

    A load balancer that first receives requests and then decides where to route each one
    A server replica that handles the request, updates the database if required, and sends the response back to the client
    A caching mechanism like memcached that kicks in for similar requests and returns objects from the cache
    A black box that handles database replication

    Specifically, I am trying to do the following:

    Set up a load balancer (my homework revealed that HAProxy is one such load balancer)
    Set up replication so that the databases can be synchronized
    Use memcached
    Configure Apache to work with multiple web servers
    Partition the application to use Amazon EC2 and Amazon S3 (my application will need a great deal of storage)

    Finally, how can I avoid burning myself when using Amazon's services? Because this is just a learning phase, I can probably make do with two or three servers, a simple load balancer, and replication, but I want to avoid accidentally paying loads of money. I am able to find resources on individual topics, but I can't find anything that starts from the big picture. Can someone please help me get started?
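
    For the load balancer step, a minimal HAProxy configuration is short enough to show whole. A hedged sketch, with hypothetical backend addresses, that round-robins HTTP requests across two Apache replicas and health-checks them:

        # /etc/haproxy/haproxy.cfg (minimal sketch)
        defaults
            mode http
            timeout connect 5s
            timeout client  30s
            timeout server  30s

        frontend www
            bind *:80
            default_backend apache_pool

        backend apache_pool
            balance roundrobin
            server web1 10.0.0.11:80 check
            server web2 10.0.0.12:80 check

    Each web replica stays stateless (sessions in memcached or the database), which is what lets you add servers behind the balancer without touching application code.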

    Read the article

  • "Ubuntu : un logiciel espion" pour Richard Stallman, qui s'insurge contre l'intégration de la recherche Amazon dans l'OS

    "Ubuntu: spyware" according to Richard Stallman. The father of GNU believes the integration of Amazon search into the OS is harmful to free software. The most recent version of Ubuntu (12.10, Quantal Quetzal) includes a controversial feature that shows users suggestions of products to buy on Amazon. Concretely, when the user launches a search for a file, an application, etc. (locally or on the web) from the desktop, Amazon suggestion links related to the keywords entered appear alongside the results. Although this feature is a way for Canonical to fund the project, it...

    Read the article

  • Automated incremental backups from Plesk on Centos to Amazon S3

    - by ChrisS
    Hi, I've done a fair bit of research on this via Google, and there seem to be quite a few possible ways of doing it. I'm looking to incrementally back up new and updated files in two directories on my Plesk-run CentOS 5.2 server: /backups and /var/www/vhosts (preferably only httpdocs within each vhost). Has anyone had good results with any of the various solutions? There seem to be various Java, Perl, and Ruby based options out there. Many thanks, Chris
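
    duplicity handles exactly this use case: incremental, S3-native, and cron-friendly. A hedged sketch of a nightly /etc/cron.d entry, assuming duplicity is installed, AWS keys are available in its environment, and a hypothetical bucket name:

        # back up /backups, then only the httpdocs directory of each vhost
        0 3 * * * root duplicity /backups s3+http://my-plesk-backup/backups
        30 3 * * * root duplicity --include '/var/www/vhosts/*/httpdocs' --exclude '**' /var/www/vhosts s3+http://my-plesk-backup/vhosts

    duplicity does a full backup the first time it runs and incrementals on every run after that.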

    Read the article

  • Start Jade via batch script as a service on a Windows Server 2012 Amazon EC2 instance

    - by E. Lüders
    I would like to start the Jade agent platform with a batch script on a Windows Server EC2 instance, as a service, so that Jade starts automatically at system startup. I use nssm to create the service, and so far that part works fine: the service is created. But when I start the service, I get an error message:

        Windows could not start the NAME service on Local Computer. The service did not return an error...

    My batch file contains only one line:

        java jade.Boot -platform-id P%random%

    If I execute the script via cmd it works fine. Anybody got an idea why this does not work when I start the batch script as a service?
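
    Services started by nssm do not inherit the interactive user's environment, so a bare java call often fails because PATH and the working directory differ from your cmd session. A hedged sketch of a more service-friendly batch file, with hypothetical install paths:

        @echo off
        rem services don't get your shell's PATH or current directory, so be explicit
        cd /d C:\jade
        "C:\Program Files\Java\jre7\bin\java.exe" -cp lib\jade.jar jade.Boot -platform-id P%RANDOM%

    Recent nssm versions can also redirect the service's stdout/stderr to a log file, which usually reveals the actual startup error.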

    Read the article

  • E-readers are complementary to tablet PCs, says Amazon, which reveals that the Kindle is its best seller of all time

    E-readers are complementary to tablet PCs, says Amazon, which reveals that its best seller of all time is the Kindle. When we spoke of a geek Christmas, we didn't know how right we were. To Amazon's great delight, the holiday was an opportunity for many users to get themselves a Kindle, its in-house e-reader. And by "many", read very many, since Amazon reveals that its electronic reader has become the best-selling item of all time on the platform. For the record, the Kindle dethrones the last volume of Harry Potter.

    Read the article

  • Amazon Web Services: Linux environment updated, with the latest versions of MySQL, Python, Ruby, and the 3.2 kernel

    Amazon Web Services: Linux environment updated with the latest versions of MySQL, Python, Ruby, and the Linux 3.2 kernel. Amazon Web Services (AWS) has just carried out a major update of the Amazon Linux AMI. The Linux operating system image that runs on the platform now includes the most recent versions of Tomcat, MySQL, Python, GCC, Ruby, etc. This release was built with the primary goal of letting companies migrate to, or stay on, older versions of the tools. Organizations can thus run different major versions of applications and programming languages. This allows code that relies on...

    Read the article

  • Cloud: data transfer charges dropped for the Windows Azure and Amazon Web Services platforms

    Cloud: data transfer charges dropped for the Windows Azure and Amazon Web Services platforms. Amazon has announced the removal of fees on all data transfers on its Amazon Web Services (AWS) cloud platform. The move aims to win over companies concerned about the cost of cloud migrations involving large amounts of data. Users will therefore be able to upload petabytes of data without paying extra fees. The decision applies to data transfers in any region and for any AWS service. On top of removing the cost of trans...

    Read the article

  • Postgres + OpenStreetMap Amazon snapshot

    - by user32425
    Hi, I am trying to start my Postgres using /etc/init.d/postgresql-8.3 start, but I got this:

        * Starting PostgreSQL 8.3 database server
        * Error: The server must be started under the locale : which does not exist any more.
        ...fail!

    I tried the solution in http://ubuntuforums.org/archive/index.php/t-397005.html but it still does not work. How can I fix this? Thanks
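
    That error usually means the locale the PostgreSQL cluster was initialized with no longer exists on the system, which is common after moving a disk image to a new machine, as with an AMI snapshot. A hedged sketch of the usual repair on Ubuntu, assuming the cluster was created with en_US.UTF-8 (pg_controldata on the data directory reports the LC_COLLATE and LC_CTYPE the cluster expects):

        # regenerate the missing locale, then retry the startup
        sudo locale-gen en_US.UTF-8
        sudo update-locale LANG=en_US.UTF-8
        sudo /etc/init.d/postgresql-8.3 start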

    Read the article

  • SAP: HANA lands on AWS; the vendor keeps devising offers to democratize its in-memory database

    SAP HANA lands on Amazon Web Services. The vendor keeps devising offers to democratize its in-memory database. To benefit from, or test, SAP's "in-memory" database, an appliance or dedicated server is no longer needed: the German vendor has just certified HANA for developer use on Amazon Web Services. "SAP HANA One is available on the AWS Marketplace", SAP announced. "It is now possible to run this powerful in-memory database on AWS EC2 for $0.99 per hour." As a reminder, ...

    Read the article

  • Why can I resolve this hostname but not a cname to this hostname?

    - by Joe Hopfgartner
    If I run dig against a hostname, I get the corresponding CNAME, yet I also get an NXDOMAIN error (non-existent domain). If I run dig against the CNAME I got, I can resolve it to an IP address successfully. It is reproducible: on the system I am currently on it is always the case, on other systems it sometimes works and sometimes doesn't, and on yet other systems it seems to work all the time. If I query a nameserver I specify (for example Google's public nameserver), I can successfully resolve the hostname. I would just blame the local system, but it seems I am not the only one having problems. The second domain (unrestrict.me) is hosted on Amazon Route 53 nameservers; the first one is on another DNS server that has proven fully functional and reliable over the years. I once switched the other domain to Amazon DNS as well; everything seemed to work, and various DNS health-check tests reported fine, yet I received a lot of support tickets saying DNS resolution did not work. Is Amazon just "bad", or am I doing something wrong? I did not tamper with the domain in any way on the local system (no caching, custom DNS views, or the like).

        joe@joe:~$ dig scorpion.premiumize.me

        ; <<>> DiG 9.8.1-P1 <<>> scorpion.premiumize.me
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 10222
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;scorpion.premiumize.me.        IN      A

        ;; ANSWER SECTION:
        scorpion.premiumize.me. 180 IN  CNAME   alpha.nue.scorpion.unrestrict.me.

        ;; Query time: 28 msec
        ;; SERVER: 127.0.0.1#53(127.0.0.1)
        ;; WHEN: Mon Jun 18 10:28:39 2012
        ;; MSG SIZE  rcvd: 84

        joe@joe:~$ dig alpha.nue.scorpion.unrestrict.me

        ; <<>> DiG 9.8.1-P1 <<>> alpha.nue.scorpion.unrestrict.me
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 25381
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;alpha.nue.scorpion.unrestrict.me.      IN      A

        ;; ANSWER SECTION:
        alpha.nue.scorpion.unrestrict.me. 300 IN A      78.46.25.130

        ;; Query time: 48 msec
        ;; SERVER: 127.0.0.1#53(127.0.0.1)
        ;; WHEN: Mon Jun 18 10:28:47 2012
        ;; MSG SIZE  rcvd: 66
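
    An answer that contains a CNAME record yet carries status NXDOMAIN means the recursive resolver followed the CNAME and the lookup of the target failed, so the overall reply is marked NXDOMAIN even though the first hop resolved. A hedged way to see where the chain breaks is to walk it by hand, bypassing any local cache:

        # follow the delegation from the root servers down
        dig +trace alpha.nue.scorpion.unrestrict.me
        # then query each authoritative Route 53 server directly (hypothetical server name)
        dig @ns-1234.awsdns-56.org alpha.nue.scorpion.unrestrict.me A

    If some of the zone's listed nameservers answer and others don't, a common cause is that the NS records at the registrar don't match the delegation set Route 53 assigned to the zone, so resolvers that happen to pick a stale server get NXDOMAIN.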

    Read the article

  • exportfs: internal: no supported addresses in nfs_client

    - by Brian
    I am trying to set up an NFS server on an AWS instance running SLES 11. After installing nfs-utils, I tried to export a test share. Here is what my /etc/exports file looks like:

        /opt/share1 ec2-50-16-224-79.compute-1.amazonaws.com(rw,async)

    exportfs -ar returns the following message:

        exportfs: internal: no supported addresses in nfs_client
        domU-12-31-38-04-7E-02.compute-1.internal:/opt/share1: No such file or directory

    Any idea what the "no supported addresses" error means? Thanks!
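
    exportfs prints "no supported addresses in nfs_client" when it cannot resolve the client hostname given in /etc/exports to an IP address, and the trailing "No such file or directory" suggests the export path itself may be missing. A hedged checklist for both:

        # does the export directory actually exist?
        sudo mkdir -p /opt/share1
        # does the client hostname resolve from this host?
        host ec2-50-16-224-79.compute-1.amazonaws.com
        # re-export and list what the kernel accepted
        sudo exportfs -ar && sudo exportfs -v

    Inside EC2, a public hostname resolves to the instance's internal address only from within EC2, so exporting to an address range (for example 10.0.0.0/16 instead of the hostname) tends to be more robust.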

    Read the article

  • Installing Yaws server on Ubuntu 12.04 (Using a cloud service)

    - by Lee Torres
    I'm trying to get a Yaws web server working on a cloud service (Amazon AWS). I've compiled and installed a local copy on the server. My problem is that I can't get Yaws to run on either port 8000 or port 80. I have the following configuration in yaws.conf:

        port = 8000
        listen = 0.0.0.0
        docroot = /home/ubuntu/yaws/www/test
        dir_listings = true

    This produces the following successful launch/result:

        Eshell V5.8.5 (abort with ^G)
        =INFO REPORT==== 16-Sep-2012::17:21:06 ===
        Yaws: Using config file /home/ubuntu/yaws.conf
        =INFO REPORT==== 16-Sep-2012::17:21:06 ===
        Ctlfile : /home/ubuntu/.yaws/yaws/default/CTL
        =INFO REPORT==== 16-Sep-2012::17:21:06 ===
        Yaws: Listening to 0.0.0.0:8000 for <3> virtual servers:
         - http://domU-12-31-39-0B-1A-F6:8000 under /home/ubuntu/yaws/www/trial
        =INFO REPORT==== 16-Sep-2012::17:21:06 ===
        Yaws: Listening to 0.0.0.0:4443 for <1> virtual servers:

    When I try to access the URL (http://ec2-72-44-47-235.compute-1.amazonaws.com), it never connects. I've tried using paping (http://code.google.com/p/paping/) to check whether port 80 or 8000 is open, and I get a "Host can not be resolved" error, so obviously something isn't working. I've also tried setting yaws.conf to port 80, like this:

        port = 80
        listen = 0.0.0.0
        docroot = /home/ubuntu/yaws/www/test
        dir_listings = true

    and I get the following error:

        =ERROR REPORT==== 16-Sep-2012::17:24:47 ===
        Yaws: Failed to listen 0.0.0.0:80 : {error,eacces}
        =ERROR REPORT==== 16-Sep-2012::17:24:47 ===
        Can't listen to socket: {error,eacces}
        =ERROR REPORT==== 16-Sep-2012::17:24:47 ===
        Top proc died, terminate gserv
        =ERROR REPORT==== 16-Sep-2012::17:24:47 ===
        Top proc died, terminate gserv
        =INFO REPORT==== 16-Sep-2012::17:24:47 ===
        application: yaws
        exited: {shutdown,{yaws_app,start,[normal,[]]}}
        type: permanent
        {"Kernel pid terminated",application_controller,"{application_start_failure,yaws,{shutdown,{yaws_app,start,[normal,[]]}}}"}

    I've also opened up port 80 using iptables. Running sudo iptables -L gives this output:

        Chain INPUT (policy ACCEPT)
        target  prot opt source                       destination
        ACCEPT  tcp  --  ip-192-168-2-0.ec2.internal  ip-192-168-2-16.ec2.internal  tcp dpt:http
        ACCEPT  tcp  --  0.0.0.0                      anywhere                      tcp dpt:http
        ACCEPT  all  --  anywhere                     anywhere                      ctstate RELATED,ESTABLISHED
        ACCEPT  tcp  --  anywhere                     anywhere                      tcp dpt:http
        ACCEPT  tcp  --  anywhere                     anywhere                      tcp dpt:http

        Chain FORWARD (policy ACCEPT)
        target  prot opt source                       destination

        Chain OUTPUT (policy ACCEPT)
        target  prot opt source                       destination

    In addition, I've gone to the security group panel in the Amazon AWS configuration area and added ports 80, 8000, and 8080 with source 0.0.0.0. Please note: if you try to access the URL of the virtual server now, it likely won't connect, because I'm not currently running the yaws daemon. I've tested it when running yaws either through yaws or yaws -i. Thanks for the patience
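
    The {error,eacces} on port 80 is expected Unix behavior: only root can bind ports below 1024, and yaws is running as an unprivileged user. Two hedged workarounds, the first assuming the beam.smp path shown (hypothetical; check your Erlang install):

        # option 1: let the Erlang VM bind privileged ports
        sudo setcap 'cap_net_bind_service=+ep' /usr/local/lib/erlang/erts-5.8.5/bin/beam.smp
        # option 2: keep yaws on 8000 and redirect port 80 to it
        sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8000

    The paping "Host can not be resolved" error is a separate, DNS-level failure on the machine running the tool; make sure you are testing the instance's public DNS name from outside EC2 and that the security group really opens the port.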

    Read the article

  • What is good usage scenario for Rackspace Cloud Files CDN (powered by AKAMAI) [closed]

    - by Andrew Smith
    I have just set up my website as a static page via Rackspace CDN / Akamai:

        www.example.co.uk is an alias for d9771e6f24423091aebc-345678991111238fabcdef6114258d0e1.r61.cf3.rackcdn.com.
        d9771e6f24423091aebc-345678991111238fabcdef6114258d0e1.r61.cf3.rackcdn.com is an alias for a61.rackcdn.com.
        a61.rackcdn.com is an alias for a61.rackcdn.com.mdc.edgesuite.net.
        a61.rackcdn.com.mdc.edgesuite.net is an alias for a63.dscg10.akamai.net.
        a63.dscg10.akamai.net has address 63.166.98.41
        a63.dscg10.akamai.net has address 63.166.98.40
        a63.dscg10.akamai.net has IPv6 address 2001:428:4c02::cda8:ecb9
        a63.dscg10.akamai.net has IPv6 address 2001:428:4c02::cda8:ed09

    The HTTP header:

        HTTP/1.0 200 OK
        Last-Modified: Fri, 19 Oct 2012 23:27:41 GMT
        ETag: fdf9e14b77def799e09e8ce815a521da
        X-Timestamp: 1350689261.23382
        Content-Type: text/html
        X-Trans-Id: tx457979be3bd746c2b4e5403a1189cdbc
        Cache-Control: public, max-age=900
        Expires: Sat, 27 Oct 2012 22:18:56 GMT
        Date: Sat, 27 Oct 2012 22:03:56 GMT
        Content-Length: 7124
        Connection: keep-alive

    I am wondering whether this is really the fastest way to serve the website. Investigating it through http://www.just-ping.com/, it seems that from many places the ping is very high. A quick look suggests they use GeoIP to resolve addresses based on WHOIS data, which is not accurate; because of that, the ping from many places stays above 300 ms (for example when an ISP in Bangalore has its requests routed as if they came from somewhere else, for a month at a time). Meanwhile, just using Amazon Web Services with Route 53 anycast DNS and only four EC2 instances, India seems to stay below 100 ms, while with Akamai it goes above 300 ms in some cases; Route 53 gets this from BGP. A quick check also suggests Akamai is not adapting to traffic feedback: the high ping stays constant even while I keep downloading large files and videos, which is the opposite of what their website claims. They state that they optimize performance using feedback from requests, while it looks like they just use GeoIP with per-city resolution (mostly big cities). Because of this, AWS with Route 53 anycast DNS seems much more reliable, as does EdgeCast, which also uses BGP, though I don't know what it costs to deploy a static website there. I'm not sure about EdgeCast either, because from isolated places there are many errors; their performance seems to come at the cost of delivery quality, with BGP switching routes during transfers of large files. So I am wondering what Akamai is really good for, since from what I understand so far they don't seem to show particular strength in any area, apart from some software-based WAF offered on their website. What I really care about is the core distribution, so the question is: is Akamai really good for video? For static websites? So far I have found AWS the most usable, with the most consistent ping and stable transfers.

    Read the article

  • Why don't cfn-init logs get sent by rsyslog?

    - by Jon M
    I just signed up for Papertrail to aggregate logs from some AWS instances I'm setting up with CloudFormation::Init. I've followed the instructions and added *.* @logs.papertrailapp.com to the end of '/etc/rsyslog.conf'. Some logs are showing up on Papertrail, but notably the contents of '/var/log/cfn-init.log' never get there, and those are the ones I'm interested in right now. Have I set up rsyslog incorrectly? Or do the CloudFormation::Init scripts just not use syslog to write log information?
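
    cfn-init writes directly to /var/log/cfn-init.log rather than through syslog, so a forwarding rule in rsyslog.conf never sees it. rsyslog can tail the file itself with its imfile module; a hedged sketch using the legacy directive syntax:

        # /etc/rsyslog.d/21-cfn-init.conf (sketch)
        $ModLoad imfile
        $InputFileName /var/log/cfn-init.log
        $InputFileTag cfn-init:
        $InputFileStateFile state-cfn-init
        $InputRunFileMonitor

    After restarting rsyslog, lines appended to cfn-init.log enter the normal syslog stream, so the existing *.* @logs.papertrailapp.com rule forwards them to Papertrail.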

    Read the article

  • Riak vs Amazon SimpleDB

    - by Fedrick
    Hi, I am looking for an eventually consistent key-value data store, and I have decided to choose between Amazon SimpleDB and Riak. Can anyone share their experiences comparing the two? Thanks in advance, Fedrick

    Read the article
