Search Results

Search found 24427 results on 978 pages for 'ec2 api tools'.


  • Port 80 not accessible on Amazon EC2

    - by Jasper
    I have started an Amazon EC2 instance (Red Hat Linux) and installed Apache on it. But when I try http://MyPublicHostName I get no response. I have ensured that my security group allows access to port 80. I can certainly reach port 22, as I am logged into the instance via SSH. Within the instance, when I do:

        $ wget http://localhost

    I do get a response, which confirms that Apache is up and serving port 80 locally. Since Amazon starts instances in a VPC, do I have to configure anything there? In fact I cannot even ping the instance, although I can SSH to it! Any advice? EDIT: Note that I had edited the /etc/hosts file earlier to make a 389-ds (LDAP) installation work. My /etc/hosts file looks like this (IP addresses shown as w.x.y.z):

        127.0.0.1   localhost.localdomain localhost
        w.x.y.z     ip-w-x-y-z.us-west-1.compute.internal
        w.x.y.z     ip-w-x-y-z.localdomain
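
    A quick sanity check, for what it's worth: EC2 security groups block ICMP by default, so ping failing while SSH works is expected and not itself a symptom. To test whether port 80 is reachable from outside at all, a minimal Python sketch (run from your own machine, not the instance; the hostname below is a placeholder for your instance's public DNS name):

        import socket

        # Placeholder: substitute your instance's public DNS name.
        host = "ec2-203-0-113-10.us-west-1.compute.amazonaws.com"
        try:
            socket.create_connection((host, 80), timeout=5).close()
            print("port 80 is reachable")
        except OSError as exc:
            print("port 80 blocked or nothing listening:", exc)

    If this times out while the security group is correct, the usual remaining suspects are a host firewall (iptables) on the instance or Apache bound only to 127.0.0.1.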

    Read the article

  • Amazon EC2 - Free memory

    - by Damo
    We have an Amazon EC2 small instance running, and over the past few days we noticed that free memory keeps going down. On this instance we run Apache and Tomcat 6. Tomcat is started with the following JVM parameters:

        -Xms32m -Xmx128m -XX:PermSize=128m -XX:MaxPermSize=256m

    We use Nagios to monitor things like pending updates, free disk space and memory. Everything else is behaving as expected, but our free memory is going down all the time. Our app receives approximately half a million hits a day. When I shut down Apache and Tomcat and ran free -m, we had only 594 MB of memory free out of the 1.7 GB of memory. Not much else is running on the instance, and when running the top command I cannot see where the memory is going.

    The app we run on Tomcat is a Grails webapp. Could there be a memory leak within our application? I read online that a small Amazon instance is fine for running Apache and Tomcat. I found a few posts showing how to configure Apache and Tomcat to limit memory usage, and I have already performed those steps; the memory is not being used up as quickly, but it is still decreasing over time. We have other Amazon EC2 small instances running Grails apps, and memory usage is fairly normal on those nodes, but they do not receive as much traffic.

    The output of free -m when run on my server is as follows:

                         total   used   free  shared  buffers  cached
        Mem:              1657   1380    277       0      158     773
        -/+ buffers/cache:        447   1209
        Swap:              895      0    895

    In your opinion, does this look OK? At what stage would the OS give memory back? Would it wait until free memory reaches 0%, or is this OS-dependent?
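
    For reference, the "free" column alone understates what is really available, because Linux counts buffers and page cache as used and only reclaims them on demand; the "-/+ buffers/cache" line (447 MB used, 1209 MB free) is the figure that matters. A minimal sketch of that calculation, assuming a Linux /proc filesystem:

        # Compute available memory the way free -m's "-/+ buffers/cache"
        # line does: free + buffers + page cache. Values in /proc/meminfo
        # are reported in kB.
        meminfo = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, rest = line.split(":", 1)
                meminfo[key] = int(rest.split()[0])

        available_kb = meminfo["MemFree"] + meminfo["Buffers"] + meminfo["Cached"]
        print("available: %d MB" % (available_kb // 1024))

    By that measure the kernel never waits for "0% free": the cache shrinks automatically whenever applications ask for more memory.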

    Read the article

  • Monitoring Between EC2 Regions

    - by ABrown
    I'm working on a small EC2 project that involves a handful of servers in two different regions (US East and EU West). My first task is to implement a Nagios monitoring solution. Monitoring within a region is simple (I just use the private domain names/IPs), but I'm a little unsure of the best way to monitor the second region without setting up a second Nagios install. The environment is fairly static, so I'm not going to script the configuration with the EC2 tools just yet. As I see it, I have two options:

    1. Two Nagios installations (which is overkill for the small number of servers I'm dealing with). Pros: I don't have to alter the group permissions, I don't have to pay for the cross-region traffic, and there is redundancy in the monitoring solution, since the Nagios servers could monitor each other. Cons: two installations to deal with, and I'd need to run another server instance.

    2. Have the single installation monitor both regions. Pros: one installation to deal with. Cons: slightly reduced security (the security group will have to have NRPE port 5666 opened to one external source IP), plus paying for a small amount of bandwidth at the Internet rate for data transfer between the regions.

    I guess my question is: how have others handled this problem, and what are your recommendations? Thanks!
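
    If it helps, opening NRPE to just the other region's monitoring host is a one-line security-group change. A hedged sketch using boto 2.x (assumed installed with AWS credentials configured; the group name and IP below are hypothetical placeholders):

        import boto.ec2

        # Allow the US East Nagios server's public IP to reach NRPE (5666)
        # on the security group used by the EU West hosts.
        conn = boto.ec2.connect_to_region("eu-west-1")
        conn.authorize_security_group(
            group_name="monitored-hosts",   # hypothetical group name
            ip_protocol="tcp",
            from_port=5666,
            to_port=5666,
            cidr_ip="203.0.113.10/32",      # example: Nagios host's public IP
        )

    Restricting the rule to a /32 keeps the extra exposure close to nil; the remaining cost is just the inter-region transfer at Internet rates.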

    Read the article

  • Is there an SO API which can fetch all Questions & Answers for a particular keyword

    - by user4203
    I am looking for an API which helps in fetching all the questions and answers from SO and other Stack Exchange sites for a particular keyword. Later, using XML-RPC, these questions will be posted as blog posts, and their answers as replies to those posts. I am just wondering whether this is possible with an API. One of my friends suggested that we should scrape, but I don't want screen scraping; instead I am looking for API requests which can handle this.
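
    For what it's worth, the public Stack Exchange API does support keyword search. A minimal Python sketch against the documented /search/advanced route (the requests package is assumed; fetching the answers for the returned questions is a second call, to /questions/{ids}/answers):

        import requests

        resp = requests.get(
            "https://api.stackexchange.com/2.3/search/advanced",
            params={
                "q": "ec2 api tools",    # keyword to search for
                "site": "stackoverflow",
                "order": "desc",
                "sort": "relevance",
                "filter": "withbody",    # include question bodies in the response
            },
        )
        resp.raise_for_status()
        for item in resp.json()["items"]:
            print(item["question_id"], item["title"], item["link"])

    Note that republishing the content elsewhere is subject to the CC BY-SA attribution requirements, which is another argument for the API over scraping.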

    Read the article

  • Should I use both WCF and ASP.NET Web API

    - by Mithir
    We already have a WCF API with basicHttpBinding. Some of the calls have complex objects in both the response and the request. We need to add RESTful capabilities to the API. At first I tried adding a webHttp endpoint, but I got:

        At most one body parameter can be serialized without wrapper elements

    If I made it Wrapped, it wasn't as pure as I need it to be. I got to read this, and this (which states "ASP.NET Web API is the new way to build RESTful services on .NET"). So my question is: should I make two APIs (two different projects), one for SOAP with WCF and one RESTful with ASP.NET Web API? Is there anything wrong, architecturally speaking, with this approach?

    Read the article

  • HP OpenView Service Desk: looking for API information

    - by Zagorulkin Dmitry
    Good day folks. I am very confused by this situation. I need to implement a system which will be based on the HP OpenView Service Desk 4.5 API, but this product has reached the end of its support period and no information is available on the official site. I am looking for information about this API (articles, samples, etc.). Right now I have only web-api.jar and the javadoc, and the methods in the javadoc are poorly documented. If you have any info, please share it with me. Thanks. Second question: are there techniques for understanding an API with a huge number of methods when it is not documented and no information is available? PS: If this question does not belong here, I will delete it.

    Read the article

  • How should an API use HTTP Basic authentication

    - by user1626384
    When an API requires that a client authenticate to it, I've seen two different scenarios used, and I am wondering which one I should use for my situation.

    Example 1: an API is offered by a company to allow third parties to authenticate with a token and secret using HTTP Basic.

    Example 2: an API accepts a username and password via HTTP Basic to authenticate an end user. Generally they get a token back for future requests.

    My setup: I will have a JSON API that I use as the backend for a mobile app and a web app. It seems like good practice for both the mobile and web app to send along a token and secret, so only these two apps can access the API, blocking any other third party. But the mobile and web app allow users to log in, submit posts, view their data, etc., so I would want them to log in via HTTP Basic as well on each request. Do I somehow use a combination of both these methods, or do I only send the end-user credentials (username and token) on each request? If I only send the end-user credentials, do I store them in a cookie on the client?
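
    A hedged sketch of the second pattern, which can coexist with a per-app key: the client sends the user's credentials once over HTTP Basic (TLS assumed) and receives a token to use from then on. All URLs and field names below are hypothetical; the requests package is assumed:

        import requests

        API = "https://api.example.com"   # hypothetical base URL

        # Step 1: exchange user credentials for a session token.
        resp = requests.post(API + "/tokens", auth=("alice", "secret-password"))
        token = resp.json()["token"]

        # Step 2: use the token (never the password) on subsequent requests.
        posts = requests.get(
            API + "/posts",
            headers={"Authorization": "Bearer " + token},
        )
        print(posts.status_code, posts.json())

    Storing only the short-lived token client-side (in a cookie or local storage) means the password is transmitted exactly once and never kept on the device.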

    Read the article

  • How to document requirements for an API systematically?

    - by Heinrich
    I am currently working on a project where I have to analyze the requirements that two given IT systems, which use cloud computing, place on a cloud API. In other words, I have to analyze what requirements these systems have for a cloud API such that they could switch to it while still being able to accomplish their current goals. Let me give you an example of some informal requirements of Project A:

    • When starting virtual machines in the cloud through the API, it must be possible to specify the memory size, CPU type, operating system and an SSH key for the root user.
    • It must be possible to monitor the inbound and outbound network traffic per hour per virtual machine.
    • The API must support the assignment of public IPs to a virtual machine and the retrieval of those public IPs.
    • ...

    In a later stage of the project I will analyze some cloud computing standards that standardize cloud APIs, to find out where possible shortcomings in the current standards are. A finding could, and probably will, be that a certain standard does not support monitoring resource usage and is thus not currently usable. I am now trying to find a way to systematically write down and classify my requirements; I feel that the way I currently have them written down (like the three points above) is too informal. I have read a couple of requirements engineering and software architecture books, but they all focus too much on details and implementation. I really only care about the functionality provided through the API/interface, and I don't think UML diagrams etc. are the right choice for me. I think the requirements I have collected so far could be described as user stories, but is that already enough for a sophisticated requirements analysis? Probably I should go "one level deeper"... Any advice or learning resources for me?
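
    One lightweight way to go a level deeper than user stories, offered as a suggestion rather than an established method: give each requirement a stable ID, a category, and a testable acceptance criterion, so the later standards comparison becomes a simple coverage table. The field names below are an assumption:

        # One requirement from Project A, recorded as structured data.
        requirement = {
            "id": "REQ-A-03",
            "source": "Project A",
            "category": "networking",          # e.g. compute, monitoring, storage
            "kind": "functional",              # vs. non-functional
            "statement": ("The API must support assigning public IPs to a "
                          "virtual machine and retrieving the assigned IPs."),
            "acceptance": ("Given a running VM, a public IP can be attached "
                           "and then read back through the API."),
            "covered_by": {"OCCI": None, "CIMI": None},   # filled in per standard
        }
        print(requirement["id"], "-", requirement["statement"])

    The covered_by map then becomes the raw material for the shortcomings analysis: any requirement with no standard marked True is a gap.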

    Read the article

  • Deprecate a web API: Best Practices?

    - by TheLQ
    Eventually you need to deprecate parts of your public web API, but I'm confused about the best way to do it. If you have a large third-party app base, just yanking old versions of the API seems like the wrong approach, as almost all apps would fail overnight. However, you can't keep ancient web APIs available forever either, as they may be outdated or there may be significant changes that make supporting them impossible. What are some best practices for deprecating old web APIs?
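
    A common pattern, sketched here with Flask purely for illustration: keep the old version mounted under its own path and announce the retirement in response headers, so well-behaved clients get machine-readable warning before the cut-off. The Sunset header is standardized in RFC 8594, the Deprecation header is a newer convention, and the routes and dates below are made up:

        from flask import Flask, jsonify

        app = Flask(__name__)

        @app.route("/v1/widgets")
        def widgets_v1():
            # Old version: still works, but announces its end of life.
            resp = jsonify({"widgets": []})
            resp.headers["Deprecation"] = "true"
            resp.headers["Sunset"] = "Wed, 31 Dec 2014 23:59:59 GMT"  # example date
            resp.headers["Link"] = '</v2/widgets>; rel="successor-version"'
            return resp

        @app.route("/v2/widgets")
        def widgets_v2():
            return jsonify({"widgets": []})

    Paired with per-version usage metrics, this lets you retire /v1 once traffic falls below a threshold instead of on a guess.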

    Read the article

  • Using PayPal to process credit cards in Sweden through an API [on hold]

    - by Mastikator
    I'm looking for a PayPal API that lets me process credit card payments without redirecting the customer to a PayPal site and without forcing consumers to use a PayPal account, and it needs to work in Sweden. Of the ones I've looked at (DoDirectPayment, Express Checkout, the PayPal Pro gateway), none has let me process credit cards in Sweden via an API that doesn't force the user to visit the PayPal login site. I have a form on my web page where the user types their credit card number, CVV2, expiration, name, address, etc. I need an API that works in Sweden and simply processes the request, without the step of being redirected to a PayPal website. The ones that I have found only work in a select few countries; is there an international solution? I've already spent over 12 work hours just looking for an API that meets my requirements.

    Read the article

  • API always returns JSONObject or JSONArray: best practices

    - by Michael Laffargue
    I'm making an API that will return data in JSON. I also want a utility class on the client side to call this API, something like:

        JSONObject sendGetRequest(Url url);
        JSONObject sendPostRequest(Url url, HashMap postData);

    However, sometimes the API sends back an array of objects: [{id:1},{id:2}]. I now have a few choices:

    1. Make the method test for JSONArray or JSONObject and send back an Object that I will have to cast in the caller.
    2. Make one method that returns JSONObject and one for JSONArray (like sendGetRequestAndReturnAsJSONArray).
    3. Make the server always send arrays, even for one element.
    4. Make the server always send objects wrapping my array.

    I am leaning towards one of the two last options, since I think it would be good to force the API to send a consistent type of data. But what would be the best practice (if one exists): always send arrays, or always send objects?
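
    A minimal sketch of the envelope approach (option 4), with made-up key names: the server always returns a single JSON object, and list results ride inside it under a fixed key, so the client can always parse a JSONObject first:

        import json

        def make_response(result):
            # Wrap single records and lists alike in one consistent envelope.
            items = result if isinstance(result, list) else [result]
            return json.dumps({"count": len(items), "items": items})

        print(make_response({"id": 1}))               # {"count": 1, "items": [{"id": 1}]}
        print(make_response([{"id": 1}, {"id": 2}]))  # {"count": 2, "items": [...]}

    The envelope also leaves room to add metadata later (paging, error details) without breaking existing clients, which is the usual argument for it over bare arrays.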

    Read the article

  • IE 11 Developer Tools - changing console target to a different frameset or iframe

    - by vladimirl
    Originally posted on: http://geekswithblogs.net/vladimirl/archive/2013/10/25/ie-11-developer-tools---changing-console-target-to-a.aspx

    To change the current console iframe/frameset, type this into the console command line, where contentIFrame is the iframe/frameset name (there should not be quotes around the iframe name):

        console.cd(contentIFrame);

    To return to the top-level window, use cd() with no argument:

        console.cd();

    It took me some time to find out that this was possible in the IE 11 developer tools. Everything is so much easier in Chrome. No drama. Sometimes I feel that I hate IE more and more. Reference (http://msdn.microsoft.com/en-us/library/ie/dn255006(v=vs.85).aspx#console_in): All script entered in the command line executes in the global scope of the currently selected window. If your webpage is built with a frameset or iframes, those frames load their own documents in their own windows. To target the window of a frameset frame or an iframe, use the cd() command, with the frame/iframe's name or ID attribute as the argument. For example, you have a frame with the name microsoftFrame and you're loading the Microsoft homepage in it:

        cd(microsoftFrame);
        Current window: www.microsoft.com/en-us/default.aspx

    Important: note that there were no quotes around the name of the frame. Only pass the unquoted name or ID value as the parameter. To return to the top level window, use cd() with no argument.

    Read the article

  • Weird entry for robots.txt on a Naked Domain in Google Webmaster Tools

    - by Metalshark
    We own a .co.uk address and use an Internet hosting company that has made mistakes around DNS in the past. Our main site is hosted on www., and their reluctance to allow editing of AAAA records online means our naked domain does not resolve. Currently, when we attempt to reach the naked version, there is no entry for the browser to go to and it displays an unreachable page (nslookup just says "Name: <name of domain>" with no further entries such as an IP or canonical name). We recently added the relevant TXT records to verify us to view both the www. version and the naked version of the domain in Google Webmaster Tools (in anticipation of the requests to our Internet host coming to fruition). Imagine our shock when double-checking Site configuration > Crawler access and finding an (admittedly failing) robots.txt with a dynamically generated HTML page (full of crude pop-up JavaScript) with references to three of our most prominent competitors. What could cause this to happen? As we are in the UK, I am assuming some DNS server is serving Google bad information. We are going to contact the Internet hosting company to fix our A and AAAA records once and for all, then check that they work in the US (using something like OpenDNS). Should we be doing more, though, for instance informing Google (through Webmaster Tools) that we are now aware there is something currently wrong with our naked domain? UPDATE: We have fixed our A records (not AAAA) and that has resolved the issue. But if there are further actions we should take for effectively having had a parking page hosted on our active, visitor-heavy, SEO-rich domain that advertised our competitors to US visitors, what would they be?

    Read the article

  • Google Webmaster Tools: strange 404 errors referred from same site

    - by Out of Control
    Starting about a month ago, I noticed a sudden increase in 404 errors in Webmaster Tools for one of my sites (over 1400 errors so far). All the errors are being referred from my own site to non-existent pages. The 404 error URLs are all of the same format:

        URL: http://www.helloneighbour.com/save/1347208508000

    The number on the end appears to be a timestamp followed by three zeros. The referring page, in this case, is:

        Linked from http://www.helloneighbour.com/save/cmw-insurance-insurance-burnaby

    When I look at the source code of that page, or use Webmaster Tools to view the page as Google sees it, I can't find any link that comes close to the one above. I built the site, and I can't find anything that might be causing these false links either. The server logs (access and error) don't show Google or anyone else trying to access these links. I've marked all these pages as fixed and waited a couple of weeks, only to find the errors come back again over the last few days. I'm wondering if anyone else has seen anything strange like this, or if someone might have a way for me to debug and replicate this error myself.

    Read the article

  • Webmaster tools, Duplicate Meta Descriptions, and Short Meta Descriptions [closed]

    - by Watsy91
    Possible Duplicate: Do meta keywords have any impact on ranking algorithms?

    I am fairly new to the whole Webmaster Tools concept. I have been looking at all the different options, such as crawl errors, HTML improvements etc. I have been looking at the Duplicate Meta Descriptions and Short Meta Descriptions reports, and was wondering if anyone could suggest ideas on how to go about improving this. It seems that all the duplicates come from the URL title and the short description. It would seem to me that most people would have information regarding the page with the same keywords as their titles. Here's an example of one:

        These are the ultimate hampers in taste, quality and value. Amongst this range of luxury hampers ar
        /food-hampers/food-hampers-over-100.html
        /thank-you-gifts/large-gifts-over-100.html

    To get to the point: I just want to know, do these things really matter? Would they have a real consequence for my sites' rankings? My sites have been falling down the rankings since early this year, and I have really started to look at Google Analytics and Webmaster Tools to try to identify certain problems. I have researched the Internet and it seems that some people don't bother and others do! I know that Stack Overflow has hundreds of people who have been through the above, and I would really appreciate it if they could give me some tips. Or, in the END, does it really matter? :D

    Read the article

  • What GUI tools are available for which DVCS?

    - by Macneil
    When I worked at Sun, we used a DVCS called Forte SCCS/Teamware, which used the old SCCS file format but was a true distributed source code revision control system. One nice feature was its strong GUI support: you could bringover and putback changes by simply clicking and dragging. It would draw trees/graphs showing how workspaces related to each other. You could also have a graph view displaying a single file's complete history, which might have had several branches and merges, allowing you to compare any two points. It also had a strong visual merge tool to let you accept changes from one of two conflicting files. Naturally, many of the current DVCSs have command-line support for these operations, but I'm looking for GUI support in order to use this in a lower-level undergraduate course I'll be teaching. I'm not saying the Forte Teamware solution was perfect, but it did seem to be ahead of the curve. Unfortunately, it's not a viable option to use for my class. Question: what support do the current DVCSs have with regard to GUIs? Do any of them work on Windows, and not just Linux? Are they "ready for prime time" or still works in progress? Are these standalone or built as plug-ins, e.g., for Eclipse? Note: to help keep this discussion focused, I'm only interested in GUI tools, and not a meta-discussion of whether GUI tools should be used in teaching.

    Read the article

  • Fake links cause crawl errors in Google Webmaster Tools

    - by Itai
    Google reported crawl errors last week on my largest site through Webmaster Tools. Here is the message:

        Google detected a significant increase in the number of URLs that return a 404 (Page Not Found) error. Investigating these errors and fixing them where appropriate ensures that Google can successfully crawl your site's pages.

    The Crawl Errors list is now full of hundreds of fake links like these (screenshot omitted), causing 16,519 errors so far. Note that my site does not even have a search.html and is not related to any of the terms shown in those URLs. Inspecting the sources for one of those links, I can see this is not simply an isolated source but a concerted effort: each of the links has a few to a dozen sources, all from different, seemingly unrelated sites. It is completely baffling why someone would spend effort doing this. What are they hoping to achieve? Is this an attack? Most importantly: does this have a negative effect on my site? Could it negatively impact my ranking? If so, what can I do about it? The few linking pages I looked at are full of thousands of links to tons of sites, have no contact information, and do not seem like the kind of people who would simply stop if asked nicely! According to Google Webmaster Tools, these errors have appeared in a span of 11 days; no crawl errors were being reported previously.

    Read the article

  • Webmaster Tools is throwing 404 errors for links not on the page

    - by plantify
    Webmaster Tools is showing thousands of 404 errors where pages on the site supposedly refer to another, incorrect URL. For example:

        URL not found: www.plantify.co.uk/shop/=
        Linked from http://www.plantify.co.uk/shop/gift-voucher and http://www.plantify.co.uk/shop/special-plant-offers

    I have obviously checked the source and cannot find any reference to this link on any page. The only consistent pattern is that the error is only reported on pages with two path sections, i.e. www.plantify.co.uk/shop does not report any error, whilst pages of the form www.plantify.co.uk/shop/xxx (where xxx can be several different pages, such as gift-voucher) all report it. I cannot seem to duplicate this error. I have run a link checker (we use Screaming Frog) and it does not report this error. I have fetched these pages as a bot, and they do not report this error either. I am at a total loss. I cannot even duplicate the issue, but it is most definitely an issue, as Webmaster Tools is reporting new errors every day. Is this perhaps Googlebot doing its own thing?

    Read the article

  • How to create markers with the Google local search API?

    - by cheesebunz
    As the question says, I do not want to use it straight from the API examples; instead I want to combine it with my own code. But I can't seem to implement it with the code I have now: the markers do not come out, and the search completely disappears if I try integrating it with my code. This is a section of my code: http://www.mediafire.com/?0minqxgwzmx

    Read the article

  • When is a Google Maps API key required?

    - by Thomas
    Recently Google changed its policy on the use of API keys: you are now supposed to no longer need an API key to place Google Maps on your website. And this worked perfectly. But now I have this map (without an API key) running on my localhost, which works fine, yet as soon as I place it online, I get a popup saying that I need an API key, while Google Maps does work on another page of that website. Could it maybe have something to do with the fact that the map that doesn't work has a lot (30+) of markers on it? Actually, using an API key wouldn't be a very nice solution for me, as this map is part of a WordPress plugin used on many websites.

    Read the article

  • How to install Ceph on an EC2 Amazon Linux AMI

    - by takaomag
    I want to test Ceph (a distributed network storage and file system) on some EC2 hosts which are derived from the Amazon Linux AMI (amzn-ami-2011.09.2.x86_64-ebs). The kernel version is 3.2 and btrfs is enabled, but the kernel config options related to Ceph (CONFIG_CEPH_FS and CONFIG_BLK_DEV_RBD) seem to be disabled. Do I have to build a new kernel and register it with Amazon? Or does someone know an easier way?
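
    A quick way to confirm what the running kernel was built with, assuming the distribution ships its config under /boot (as Amazon Linux does); this is just a grep expressed in Python:

        import platform

        config_path = "/boot/config-" + platform.release()
        wanted = ("CONFIG_CEPH_FS", "CONFIG_BLK_DEV_RBD", "CONFIG_CEPH_LIB")
        with open(config_path) as f:
            for line in f:
                if any(opt in line for opt in wanted):
                    # Prints e.g. "CONFIG_CEPH_FS=m" or "# CONFIG_CEPH_FS is not set"
                    print(line.strip())

    Worth noting: only the client side (mounting CephFS or mapping RBD block devices) needs those kernel options; the Ceph daemons themselves run in userspace, so a storage-only test cluster can work without a rebuilt kernel.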

    Read the article

  • SSH timeout when connecting to EC2 instances

    - by Johnny Wong
    After there has been a timeout on my SSH connection (e.g., I left the SSH session running and closed my laptop), it is very difficult to log in again with SSH; I keep getting an SSH timeout error. I tried removing the hostname from the known_hosts file (per a friend's suggestion), which sometimes helps, but other times doesn't, and I don't know why. This is in connection with accessing my EC2 instance on Amazon. This is driving me nuts. Any help is much appreciated.
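
    One common mitigation, offered here as a guess at the underlying cause (a NAT device or firewall silently dropping the idle connection): enable client-side keepalives so the session never goes idle. These are standard OpenSSH client options, set in ~/.ssh/config:

        Host *.compute.amazonaws.com
            ServerAliveInterval 60
            ServerAliveCountMax 3

    With this, the client sends an application-level probe every 60 seconds and gives up cleanly after three unanswered probes, instead of leaving a half-dead session that the server side may still consider open.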

    Read the article

  • AWS EC2 & WordPress / WooCommerce, Product pages dragging

    - by Stephen Harman
    http://ec2-54-243-161-225.compute-1.amazonaws.com/shop/product-category/dark-horse/

    If you click on any of the products on this page, you'll notice it either takes a minute or more to load or doesn't load at all. I have about 11,000 products in the database, each with about 3 images attached, and the database is about 108 MB in size. Any suggestions on fixing this speed issue? Thank you in advance!

    Read the article

  • How to run Django 1.3/1.4 on uWSGI on nginx on EC2 (Apache2 works)

    - by Tadeck
    I am posting a question on behalf of my administrator. Basically, he wants to set up a Django app (made on Django 1.3, but it will be moving to Django 1.4, so I hope it should not matter which of the two is used) on WSGI on nginx, installed on Amazon EC2. The app runs correctly when Django's development server is used (with ./manage.py runserver 0.0.0.0:8080, for example), and Apache also works correctly. The only problem is with nginx, and it looks like something is wrong with the nginx/uWSGI or Django configuration. His description is as follows:

    The server has been configured according to many tutorials, but unfortunately nginx and uWSGI still do not work with the application.

    ProjectName.py:

        import os, sys, wsgi
        os.environ.setdefault("DJANGO_SETTINGS_MODULE", "ProjectName.settings")
        from django.core.wsgi import get_wsgi_application
        application = get_wsgi_application()

    I run uWSGI with the command:

        uwsgi -x /etc/uwsgi/apps-enabled/projectname.xml

    XML file:

        <uwsgi>
            <chdir>/home/projectname</chdir>
            <pythonpath>/usr/local/lib/python2.7</pythonpath>
            <socket>127.0.0.1:8001</socket>
            <daemonize>/var/log/uwsgi/projectname.log</daemonize>
            <processes>1</processes>
            <uid>33</uid>
            <gid>33</gid>
            <enable-threads/>
            <master/>
            <vacuum/>
            <harakiri>120</harakiri>
            <max-requests>5000</max-requests>
            <vhost/>
        </uwsgi>

    In the logs from uWSGI:

        *** no app loaded. going in full dynamic mode ***

    In the logs from nginx:

        XXX.com [pid: XXX|app: -1|req: -1/1] XXX.XXX.XXX.XXX () {48 vars in 989 bytes} [Date] GET / => generated 46 bytes in 77 msecs (HTTP/1.1 500) 2 headers in 63 bytes (0 switches on core 0)
        added /usr/lib/python2.7/ to pythonpath.
        Traceback (most recent call last):
          File "./ProjectName.py", line 26, in <module>
            from django.core.wsgi import get_wsgi_application
        ImportError: No module named wsgi
        unable to load app SCRIPT_NAME=XXX.com|

    Example tutorials that were used:

        http://projects.unbit.it/uwsgi/wiki/RunOnNginx
        https://docs.djangoproject.com/en/1.4/howto/deployment/wsgi/

    Do you have any idea what has been done incorrectly, or what should be done to make Django work on uWSGI on nginx on EC2?
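
    Two observations, offered as hypotheses rather than a confirmed diagnosis. First, django.core.wsgi (and get_wsgi_application) only exists from Django 1.4 onward, which is exactly what the ImportError in the log says, so the uWSGI environment is probably picking up a Django 1.3 install. A WSGI module sketch that works under both versions (module and settings names follow the question):

        # ProjectName.py -- version-tolerant WSGI entry point sketch.
        import os

        os.environ.setdefault("DJANGO_SETTINGS_MODULE", "ProjectName.settings")

        try:
            # Django >= 1.4
            from django.core.wsgi import get_wsgi_application
            application = get_wsgi_application()
        except ImportError:
            # Django 1.3: the old-style handler class.
            import django.core.handlers.wsgi
            application = django.core.handlers.wsgi.WSGIHandler()

    Second, "no app loaded. going in full dynamic mode" suggests the XML never actually points uWSGI at the module: with <vhost/>, the app is expected to be selected per-request by nginx, whereas in the simple single-app case one would drop <vhost/> and add something like <module>ProjectName</module> alongside <chdir>. Treat that option name as an assumption to verify against the uWSGI documentation for the installed version.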

    Read the article
