Search Results

Search found 24427 results on 978 pages for 'ec2 api tools'.

Page 42/978

  • Can I use Google Maps API (Places API) in my iPhone app to find locations near me?

    - by Mark
    I have a couple of questions regarding using the Google Maps API, especially the Places API, in my iPhone application. Can I use the Places API in my iPhone app and still release the app as a paid app? Could I release my app as free if I am unable to use these APIs in a paid app? Is there an example of figuring out store locations around the user's current location using the Places API? For example, if the user types "Groceries" in the app, I would like to show all the stores that sell groceries near the user's location. Thanks!

    Read the article
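    A minimal sketch of the kind of nearby lookup the question above describes, using the Places API "Nearby Search" web endpoint rather than an iOS SDK; the API key and coordinates are placeholders, and the keyword stands in for the user's search term:

        import requests  # assumes the requests library is available

        API_KEY = "YOUR_PLACES_API_KEY"          # placeholder key
        latitude, longitude = 40.7128, -74.0060  # placeholder device location

        # Nearby Search: places around a point, filtered by a keyword.
        response = requests.get(
            "https://maps.googleapis.com/maps/api/place/nearbysearch/json",
            params={
                "location": f"{latitude},{longitude}",
                "radius": 1500,           # metres
                "keyword": "groceries",   # what the user typed
                "key": API_KEY,
            },
        )
        for place in response.json().get("results", []):
            print(place.get("name"), "-", place.get("vicinity"))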

  • Can I use IP addresses to limit API access

    - by ed209
    I have a mini API that is only for an app I have built. The API service is on a separate domain from my app. I make JSONP calls to it and receive JSON in return. Therefore I only want my app to be able to access it. Can I just list a series of IP addresses for my app and allow only them? Is there a better way to stop requests from anyone else reaching my API?

    Read the article
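    For illustration only, a minimal sketch of what an IP allowlist looks like as WSGI middleware (the addresses are hypothetical). One caveat: JSONP requests are issued by end users' browsers, so their source IPs will not be the app server's, and a server-side allowlist like this will not match them; a per-request token or signed key is usually a more reliable way to limit callers.

        ALLOWED_IPS = {"203.0.113.10", "203.0.113.11"}  # hypothetical allowed addresses

        def ip_allowlist(app):
            """Wrap a WSGI app and reject requests from addresses outside the allowlist."""
            def middleware(environ, start_response):
                client_ip = environ.get("REMOTE_ADDR", "")
                if client_ip not in ALLOWED_IPS:
                    start_response("403 Forbidden", [("Content-Type", "text/plain")])
                    return [b"Forbidden"]
                return app(environ, start_response)
            return middleware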

  • Obtaining XML from U.S. Postal Service (USPS) rate calculator API with PHP

    - by Chris F
    hoping somebody here can help me. I'm attempting to pull an XML page from the U.S. Postal Service (USPS) rate calculator, using PHP. Here is the code I am using (with my API login and password replaced of course):

        <?
        $api = "http://production.shippingapis.com/ShippingAPI.dll?API=RateV4&XML=<RateV4Request ".
               "USERID=\"MYUSERID\" PASSWORD=\"MYPASSWORD\"><Revision/><Package ID=\"1ST\">".
               "<Service>FIRST CLASS</Service><FirstClassMailType>PARCEL</FirstClassMailType>".
               "<ZipOrigination>12345</ZipOrigination><ZipDestination>54321</ZipDestination>".
               "<Pounds>0</Pounds><Ounces>9</Ounces><Container/><Size>REGULAR</Size></Package></RateV4Request>";
        $xml_string = file_get_contents($api);
        $xml = simplexml_load_string($xml_string);
        ?>

    Pretty straightforward. However it never returns anything. I can paste the URL directly into my browser's address bar:

        http://production.shippingapis.com/ShippingAPI.dll?API=RateV4&XML=<RateV4Request USERID="MYUSERID" PASSWORD="MYPASSWORD"><Revision/><Package ID="1ST"><Service>FIRST CLASS</Service><FirstClassMailType>PARCEL</FirstClassMailType><ZipOrigination>12345</ZipOrigination><ZipDestination>54321</ZipDestination><Pounds>0</Pounds><Ounces>9</Ounces><Container/><Size>REGULAR</Size></Package></RateV4Request>

    And it returns the XML I need, so I know the URL is valid. But I cannot seem to capture it using PHP. Any help would be tremendously appreciated. Thanks in advance.

    Read the article
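    A likely culprit in cases like this is that the XML in the query string contains spaces, quotes, and angle brackets that must be percent-encoded before most HTTP clients will fetch the URL, while a browser encodes them silently. As a hedged illustration (not a verified fix for this poster's setup), here is the same RateV4 request in Python, letting the client handle the encoding; the user ID, password, and ZIP codes are placeholders:

        import requests

        xml = (
            '<RateV4Request USERID="MYUSERID" PASSWORD="MYPASSWORD">'
            '<Revision/><Package ID="1ST"><Service>FIRST CLASS</Service>'
            '<FirstClassMailType>PARCEL</FirstClassMailType>'
            '<ZipOrigination>12345</ZipOrigination><ZipDestination>54321</ZipDestination>'
            '<Pounds>0</Pounds><Ounces>9</Ounces><Container/><Size>REGULAR</Size>'
            '</Package></RateV4Request>'
        )

        # requests percent-encodes the XML when it is passed as a query parameter.
        resp = requests.get(
            "http://production.shippingapis.com/ShippingAPI.dll",
            params={"API": "RateV4", "XML": xml},
        )
        print(resp.text)  # raw XML response from the rate calculator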

  • API Management Solutions

    - by Mike
    I'm currently building an API and am looking for a tool that lets me monitor (in a GUI) and rate-limit usage. I've come across a few enterprise solutions, including:

        http://apigee.com/
        http://mashery.com/
        http://www.layer7tech.com/
        http://www.3scale.net/

    The Apigee enterprise plan is exactly what I'm looking for, but plans start at $3000/month, which is out of my price range. The other solutions are all either too expensive or do not provide what I'm looking for. This led me to look at some open source options, including:

        http://apiaxle.com/
        https://code.google.com/p/varnish-apikey/wiki/UsageManual

    Varnish seems like a fairly complete solution; however, I would need to build a GUI to visualise the data. My final option would be to build a solution from scratch using EventMachine and Ruby. Any advice?

    Read the article
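    If the build-from-scratch route wins out, the core of per-key rate limiting is fairly small. Below is a minimal in-process token-bucket sketch, in Python rather than the EventMachine/Ruby stack mentioned above, with no persistence, clustering, or GUI; treat it as an illustration of the technique, not a drop-in component:

        import time

        class TokenBucket:
            """Allow roughly `rate` requests per second, with bursts up to `capacity`."""

            def __init__(self, rate: float, capacity: float):
                self.rate = rate
                self.capacity = capacity
                self.tokens = capacity
                self.updated = time.monotonic()

            def allow(self) -> bool:
                now = time.monotonic()
                # Refill tokens in proportion to the time elapsed since the last call.
                self.tokens = min(self.capacity,
                                  self.tokens + (now - self.updated) * self.rate)
                self.updated = now
                if self.tokens >= 1:
                    self.tokens -= 1
                    return True
                return False

        buckets = {}  # one bucket per API key

        def is_allowed(api_key: str) -> bool:
            bucket = buckets.setdefault(api_key, TokenBucket(rate=5, capacity=10))
            return bucket.allow()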

  • HD Video Capture Card w/ Good API?

    - by Sheep Slapper
    Does anyone here know of a good HD video capture card that has a good (comprehensive) API? I administer a few servers that do some video encoding right now, but when we make the switch to HD cameras, they won't be sufficient. In addition, the servers we have now are black boxes, closed to me except to start/stop the video capture device. I'd like to be able to roll my own, so we can better integrate it with our existing systems, but I know almost nothing about what kind of HD capture cards are out there, and if I can avoid spending money just to test their APIs, that would rock. So does anyone have any experience with this? All our other software is in C#, and I'd like to set up the new servers with web interfaces to start/stop the capture (also in C#, probably using .NET 3.5). I'm not sure how language-specific these APIs would be, but that's what I'm working with, just as a reference point. I appreciate any help the community can give!

    Read the article

  • Easy GUI way to auto scale EC2 and RDS: AWS console, Scalr, Ylastic...?

    - by Zillo
    I am managing all my instances with the AWS Management Console (the GUI web console), but now I want to use Auto Scaling and it seems that this cannot be done with that console. Yes, there is CloudWatch, but I can only create alarms (e-mail notifications); it seems that CloudWatch needs you to add the auto scaling policy somewhere else (from the command-line tools?). I would like to use some easy GUI interface. Ylastic and Scalr seem to be good options. Which one do you think is better? Regarding Scalr, is there any difference between the open source software Scalr and the service Scalr.net? I mean, is the GUI interface the same? I like the idea of Scalr because I do not need to give my Secret Access Key to a third party (as with Ylastic or Scalr.net). One question about the Scalr software: does it have to be installed on the instances, or on another machine? Do I need to set up all my security permissions, AMIs, snapshots, etc. again, or can I use the AWS Management Console for everything and Scalr just for auto scaling?

    Read the article
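    For reference, auto scaling can also be scripted rather than driven from a GUI. A minimal sketch using boto3 (a Python SDK that postdates this question; the group name, AMI ID, instance type, and sizes are placeholders):

        import boto3

        autoscaling = boto3.client("autoscaling", region_name="us-east-1")

        # Launch configuration: which AMI and instance type new instances use.
        autoscaling.create_launch_configuration(
            LaunchConfigurationName="web-lc",
            ImageId="ami-12345678",
            InstanceType="t3.micro",
        )

        # The group itself: how many instances to keep, and where.
        autoscaling.create_auto_scaling_group(
            AutoScalingGroupName="web-asg",
            LaunchConfigurationName="web-lc",
            MinSize=1,
            MaxSize=4,
            AvailabilityZones=["us-east-1a"],
        )

        # A simple scaling policy that a CloudWatch alarm can trigger.
        autoscaling.put_scaling_policy(
            AutoScalingGroupName="web-asg",
            PolicyName="scale-out",
            AdjustmentType="ChangeInCapacity",
            ScalingAdjustment=1,
        )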

  • Windows Server 2008 Antivirus Software with an API

    - by Dave Jellison
    I'm looking for an antivirus package that is compatible with Windows Server 2008. That's not the hard part. What I need is an API layer on the antivirus that I can call from managed .NET code. For example: I am developing an ASP.NET (C#) website that allows users to upload files to the web server that the web site resides on. We have full control of the server, so there are no security/rights issues. I need to be able to run the antivirus scan on newly uploaded files without (hopefully) shelling out to a command-line version of the software. Does anyone know of such a package?

    Read the article

  • Need help with setting up MS-SQL on EC2.

    - by Hareem Haque
    I have a large MS-SQL database that I need to move to the AWS cloud. The issues are how to persist my SQL data and how to set up an MS-SQL cluster using a Windows AMI. The real problem is that for replication I need to use the private IPs of the instances; however, these IPs are dynamic and change on server launch. Any ideas on how I can get around this problem? I really appreciate your help. Best regards, Hareem Haque

    Read the article

  • Nike OCR Twitter API RSVP software not working

    - by daniel
    I recently purchased an OCR program that reads through Nike's Twitter page and takes a circled word out of a picture and sends it back in a Twitter DM. However, it recently stopped working, probably because the person whom I bought it from deleted the server and hasn't responded to my emails. It is a Twitter API app. It still goes through the tweets and I can enter usernames, but it does not read the image anymore nor send a DM. If anyone knows how I can go about fixing this, it would be a huge help. Here is the program website: http://rsvpocr.nanoworking.com

    Read the article

  • V4L2 and ALSA: Kernel SPI or User API?

    - by pnongrata
    I'm trying to understand what Video4Linux and ALSA are (exactly), and I can't discern whether they're APIs for Linux applications to use (userspace), or backend services that are only available to the Linux kernel (a sort of kernel-space SPI), or whether they are something entirely different. On one hand, those articles make it sound like it's an API for applications to use. However, the V4L2 page has a section titled "Software supporting Video4Linux"... So is V4L2 a library that applications use, or is it a module that "snaps into" the kernel? I'm so confused; thanks in advance.

    Read the article

  • How do you persist installed software & configurations on an Amazon EC2 instance?

    - by Richard
    I've gotten a base Debian AMI up and running and now I need to know the best way to maintain it. I've run the updates (aptitude update/upgrade) and installed/configured my software (Apache, Ruby, etc.), but if I reboot the instance or start a new one I'll have to do all this work over again. How do you persist these types of things over a reboot? Do you build a new AMI every time you adjust some tiny piece of the system? Or is there some way to feed it a script on startup that configures it in "real time"? I know I could go all the way with a Reductive Labs Puppet-style setup, but that's a bit too much for my needs right now (1-2 servers). Any best practices on this? Update: I found a bit of information on using User-Data to run scripts at instance boot time.

    Read the article
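    Building on the update above: a user-data script is run by the instance on first boot, so the install/configure steps can live in the launch call instead of in a new AMI. A minimal sketch with boto3 (a later Python SDK used here purely for illustration; the AMI ID, key name, and package list are placeholders):

        import boto3

        # Shell script that cloud-init runs on the instance's first boot.
        user_data = """#!/bin/bash
        apt-get update -y
        apt-get install -y apache2 ruby
        # ...any further configuration in "real time" goes here...
        """

        ec2 = boto3.client("ec2", region_name="us-east-1")
        ec2.run_instances(
            ImageId="ami-12345678",   # placeholder Debian AMI
            InstanceType="t3.micro",
            KeyName="my-key",         # placeholder key pair
            MinCount=1,
            MaxCount=1,
            UserData=user_data,       # boto3 base64-encodes this for the API
        )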

  • Am I using too much memory? (Rails on EC2 with Resque)

    - by Stpn
    I am looking at the memory usage of the Rails application (it uses background processes via Resque) and, since the common answer to the question "how many workers is too many?" is "test and see", I ran some memory commands and wonder if someone can help figure out whether the memory usage is already too high, or whether I can still add some extra workers. This is all under maximum load:

        $ free -t -m
                     total       used       free     shared    buffers     cached
        Mem:          1756       1532        223          0         12        229
        -/+ buffers/cache:        1291        464
        Swap:          895         10        885
        Total:        2652       1543       1108

        $ vmstat
        procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
         r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
         0  0  10588 156172  13400 326476    1    6     4     0    5    4  1  0 99  0

    If there is any extra info I can provide to help answer this, I would be happy to do so. If the question is strange in some way, please let me know and I'd be glad to fix it.

    Read the article
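    A note on reading the free output above: the kernel counts buffers and cache as "used", so the informative line is -/+ buffers/cache, which is 1532 − 12 − 229 ≈ 1291 MB genuinely in use, leaving roughly 464 MB available for additional workers, with only 10 MB of swap touched.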

  • EC2 Amazon Linux AMI MySQL CPU @ 62% When Idle?

    - by Jeff
    I am running MySQL on an Amazon Linux AMI. There is nothing connected to it. There are no connections and no other applications running that use MySQL. It is completely idle, yet top is reporting that mysql is using 62% of the CPU. Why is this happening and how do I fix it?

        Cpu(s):  0.2%us,  0.2%sy,  0.0%ni, 97.8%id,  0.0%wa,  0.0%hi,  0.0%si,  1.7%st
        Mem:   1738504k total,   390708k used,  1347796k free,    56888k buffers
        Swap:   917500k total,        0k used,   917500k free,   229804k cached

          PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
         2959 mysql     20   0  466m  39m 5244 S 62.2  2.3   4:00.67 mysqld
            1 root      20   0 19252 1504 1212 S  0.0  0.1   0:00.20 init
            2 root      20   0     0    0    0 S  0.0  0.0   0:00.00 kthreadd

    There are no connections...

        mysql> show processlist;
        +----+------+-----------+------+---------+------+-------+------------------+
        | Id | User | Host      | db   | Command | Time | State | Info             |
        +----+------+-----------+------+---------+------+-------+------------------+
        |  5 | root | localhost | NULL | Query   |    0 | NULL  | show processlist |
        +----+------+-----------+------+---------+------+-------+------------------+

    Read the article

  • CloudWatch alarms from an Amazon AWS EC2 instance are always in UTC; how can I change the alarm time zone to Eastern?

    - by RightHandedMonkey
    I am running an Amazon Linux AMI, and the alarms that I've set up all show UTC (universal time). Reading these alarms is inconvenient, and I'd like them reported in the Eastern time zone (America/New_York). I've already set my /etc/localtime to point to /usr/share/zoneinfo/America/New_York:

        ln -s /usr/share/zoneinfo/America/New_York /etc/localtime

    But it is still sending alarms in the UTC timezone. Does anyone have a solution to this?

    Read the article
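    CloudWatch emits timestamps in UTC regardless of the instance's /etc/localtime, so the practical options are to read them as UTC or to convert wherever the notification is processed. A minimal conversion sketch in Python (the timestamp string is a made-up example of the format such notifications carry):

        from datetime import datetime
        from zoneinfo import ZoneInfo  # Python 3.9+

        # Hypothetical UTC timestamp of the kind an alarm notification carries.
        raw = "2023-05-01T14:30:00Z"

        utc_time = datetime.fromisoformat(raw.replace("Z", "+00:00"))
        eastern = utc_time.astimezone(ZoneInfo("America/New_York"))
        print(eastern.strftime("%Y-%m-%d %H:%M:%S %Z"))  # e.g. 2023-05-01 10:30:00 EDT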

  • Bootstrapped Ubuntu 12.04 EC2 instance. Where to find log?

    - by nocode
    So I bootstrapped a shell script to install and run a bunch of tasks. It looks like it ran for the most part, but I added one part, formatting an extra EBS volume, which is pretty straightforward:

        mkfs.ext4 /dev/xvdf
        mkdir –m 000 /vol01
        echo “/dev/xvdf /vol01 auto noatime 0 0” | sudo tee –a /etc/fstab
        sudo mount /vol01

    I was able to install MongoDB, NGINX and Forever. I selected to use /dev/xvdf in the AWS console and can see it. The third line is not in fstab either. I've searched through various logs in /var/log/ but I don't really see much indicating the execution of the bootstrap. Logs that I have seen and looked through: auth.log, boot.log, dmesg, dpkg.log, syslog, udev.

    Read the article

  • With a node.js powered server on EC2, how can I decrease the TCP connection time?

    - by talentedmrjones
    While profiling my application I've noticed that, in the Firebug Net panel, the "Connecting" time, that is, the time spent waiting for a TCP connection, is consistently around 70–100 ms (see the image in the original post). Of course, in the grand scheme of things 100 ms is not long, but I have seen other services that respond with a 0 ms connect time. So if other servers can, I should be able to as well. Any thoughts on how I might even begin to troubleshoot this?

    Read the article
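    One way to start troubleshooting is to measure the raw handshake time from a few different client locations, which separates network round-trip distance from server-side accept delays (services showing a 0 ms connect time are often just geographically close or reusing keep-alive connections). A minimal measurement sketch, with a placeholder hostname:

        import socket
        import time

        host, port = "example.com", 80  # placeholder target

        start = time.monotonic()
        sock = socket.create_connection((host, port), timeout=5)  # TCP three-way handshake
        connect_ms = (time.monotonic() - start) * 1000
        sock.close()
        print(f"TCP connect took {connect_ms:.1f} ms")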

  • How to setup a virtual host in Ubuntu running on Amazon EC2 instance?

    - by Rade
    I have an app that's accessible via 1.2.3.4/myapp. The app is installed in /var/www/myapp. I've set up a subdomain (apps.mydomain.com) that points to 1.2.3.4. I want the server to serve /var/www/myapp when I type apps.mydomain.com/myapp; how do I do that? I have experience creating virtual hosts (lots of them) locally, but I'm lost because this is now in production and it's a little different. Here's my virtual host config:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName apps.mydomain.com/myapp
            DocumentRoot /var/www/myapp/public

            <Directory />
                Options FollowSymLinks
                AllowOverride All
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride All
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog ${APACHE_LOG_DIR}/error.log

            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn

            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    Any idea why I still see the files instead of being pointed to the document root? In case someone asks, the app is based on the Laravel 4 framework. It's really bad right now because anyone can access the files from the browser.

    Read the article

  • jQuery Tools alert works once (but only once)

    - by Jim Miller
    I'm trying to build a simple alert mechanism with jQuery Tools -- in response to a bit of Javascript code, pop up an overlay with a message and an OK button that, when clicked, makes the overlay go away. Trivial, or it should be. I've been slavishly following http://flowplayer.org/tools/demos/overlay/trigger.html, and have something that works fine the first time it's invoked, but only that time. If I repeat the JS action that should expose the overlay, it doesn't. My content/DIV:

        <div class='modal' id='the_alert'>
          <div id='modal_content' class='modal_content'>
            <h2>hi there</h2>
            this is the body
            <p>
              <button class='close'>OK</button>
            </p>
          </div>
          <div id='modal_background' class='modal_background'><img src='/images/overlay/f9f9f9-180.png' class='stretch' alt='' /></div>
        </div>

    and the Javascript:

        function showOverlayDialog() {
          $('#the_alert').overlay({
            mask: {color: '#cccccc', loadSpeed: 200, opacity: 0.9},
            closeOnClick: false,
            load: true
          });
        }

    As I said: When showOverlayDialog() is invoked the first time, the overlay appears just like it should, and goes away when the "OK" button is clicked. But if I cause showOverlayDialog() to run again, without reloading the page, nothing happens. If I reload the page, then the pattern repeats -- the first invocation brings up the overlay, but the second one doesn't. I'm obviously missing something -- any advice out there? Thanks!

    Read the article

  • Free tools/libraries to compare tables with filtering in different databases and visualize/sync diff

    - by MicMit
    I am building a certain GUI in C# for a content manager and am looking for tools, code snippets or open libraries (code ideally in C#) which allow me the following:

    1. For table A in database X (test) and table A in database Y (production), and for a simple filter (e.g. listname = "XYZ"), I need to show additions/deletions/updates in some way, which might be side-by-side or just an HTML report, e.g.:

        2 records added
        (html table with some fields)
        2 records deleted
        (html table with some fields)

    Considering that this task is very common, I guess certain components should exist? Components would either return some collections from the parameters given, for further visualizing, or just produce the reports mentioned above.

    2. I need to push the changes for the filter I mentioned in 1 and update the table in the production database for this filter only (i.e. for the particular list approved by the content person). Again, there are probably certain SQL code generators, either components in addition to the diffs or standalone.

    3. The key thing: the tools/libraries should be suitable for integration with the existing application in C#.

    Read the article
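    As a rough illustration of the diff step only (in Python rather than the C# the question asks for, and ignoring filtering and report rendering), computing additions, deletions and updates between two keyed row sets is compact:

        def diff_rows(test_rows, prod_rows, key="id"):
            """Return (added, deleted, updated) between two lists of row dicts keyed by `key`."""
            test = {row[key]: row for row in test_rows}
            prod = {row[key]: row for row in prod_rows}
            added = [test[k] for k in test.keys() - prod.keys()]
            deleted = [prod[k] for k in prod.keys() - test.keys()]
            updated = [(prod[k], test[k])
                       for k in test.keys() & prod.keys() if test[k] != prod[k]]
            return added, deleted, updated

        # Example: rows pulled (hypothetically) from the test and production tables.
        test_rows = [{"id": 1, "name": "a"}, {"id": 2, "name": "b2"}, {"id": 3, "name": "c"}]
        prod_rows = [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]
        added, deleted, updated = diff_rows(test_rows, prod_rows)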

  • The way I think about Diagnostic tools

    - by Daniel Moth
    Every piece of software has issues, or as we like to call them, "bugs". That is not a discussion point, just a mere fact. It follows that an important skill for developers is being able to diagnose issues in their code. Of course we need to advance our tools and techniques so we can prevent bugs getting into the code (e.g. unit testing), but beyond designing great software, diagnosing bugs is an equally important skill. To diagnose issues, the most important assets are good techniques, skill, experience, and maybe talent. What also helps is having good diagnostic tools, and what helps further is knowing all the features that they offer and how to use them.

    The following classification is how I like to think of diagnostics. Note that, as with any attempt to bucketize anything, you run into overlapping areas and blurry lines; nevertheless, I will continue sharing my generalizations ;-)

    It is important to identify at the outset if you are dealing with a performance or a correctness issue.

    If you have a performance issue, use a profiler. I hear people saying "I am using the debugger to debug a performance issue", and that is fine, but do know that a dedicated profiler is the tool for that job. Just because you don't need them all the time, they typically cost more, and you are not as familiar with them as you are with the debugger doesn't mean you shouldn't invest in one, instead of trying to exclusively use the wrong tool for the job. Visual Studio has a profiler and a concurrency visualizer (for profiling multi-threaded apps).

    If you have a correctness issue, then you have several options - that's next :-) This is how I think of identifying a correctness issue:

    - Do you want a tool to find the issue for you at design time? The compiler is such a tool - it gives you an exact list of errors. Compilers now also offer warnings, which is their way of saying "this may be an error, but I am not smart enough to know for sure". There are also static analysis tools, which go a step further than the compiler in identifying issues in your code, sometimes with the aid of code annotations and other times just by pointing them at your raw source. An example is FxCop, and much more in Visual Studio 11 Code Analysis.

    - Do you want a tool to find the issue for you with code execution? Just like static tools, there are also dynamic analysis tools that, instead of statically analyzing your code, analyze what your code does dynamically at runtime. Whether you have to set up some unit tests to invoke your code at runtime, or have to manually run your app (and interact with it) under the tool, or have to use a script to execute your binary under the tool… that varies. The result is still a list of issues for you to address after the analysis is complete, or a pause of the execution when the first issue is encountered. If a code path was not taken, no analysis for it will exist, obviously. An example is the GPU race detection tool that I'll be talking about on the C++ AMP team blog. Another example is the MSR concurrency CHESS tool.

    - Do you want you to find the issue at design time using a tool? Perform a code walkthrough on your own or with colleagues. There are code review tools that go beyond just diffing sources, and they help you with that aspect too. For example, there is a new one in Visual Studio 11, and searching with my favorite search engine yielded this article based on the Developer Preview.

    - Do you want you to find the issue with code execution? Use a debugger - let's break this down further next.

    This is how I think of debugging:

    - There is post mortem debugging. That means your code has executed and you did something in order to examine what happened during its execution. This can vary from manual printf and other tracing statements, to trace events (e.g. ETW), to taking dumps. In all cases, you are left with some artifact that you examine after the fact (after code execution) to discern what took place, hoping it will help you find the bug. Learn how to debug dump files in Visual Studio.

    - There is live debugging. I will elaborate on this in a separate post, but this is where you inspect the state of your program during its execution and try to find what the problem is. More from me in a separate post on live debugging.

    - There is a hybrid of live plus post-mortem debugging. This is, for example, what tools like IntelliTrace offer.

    If you are a tools vendor interested in the diagnostics space, it helps to understand where in the above classification your tool excels, where its primary strength is, so you can market it as such. Then it helps to see which of the other areas above your tool touches on, and how you can make it even better there. Finally, see what areas your tool doesn't help at all with, and evaluate whether it should, or whether it should continue to stay clear.

    Even though the classification helps us think about this space, the reality is that the best tools are either extremely excellent in only one of these areas, or, more often, very good across a number of them. Another approach is to offer a toolset covering all areas, with appropriate integration and hand-off points from one to the other.

    Anyway, with that brain dump out of the way, in follow-up posts I will dive into live debugging, and specifically live debugging in Visual Studio - stay tuned if that interests you. Comments about this post by Daniel Moth are welcome at the original blog.

    Read the article

  • What are the options for hosting a small Plone site?

    - by Tina Russell
    I’ve developed a portfolio website for myself using Plone 4, and I’m looking for someplace to host it. Most Plone hosting services seem to focus on large, corporate deployments, but I need something that I can afford on a very limited budget and fits a small, single-admin website. My understanding is that my basic options are thus:

    1. I can go with a hosting service that specifically provides Plone. I know of WebFaction, but what others exist? Also, I’d have two stipulations for a Plone hosting service: (a) it needs to use Plone 4, for which I’ve developed my site, and (b) it needs to allow me SSH access to a home directory (including the Plone configuration), so that I may use my custom development eggs and such.

    2. I could use a VPS hosting service. What are my options here? Again, I need something cheap and scaled to my level.

    3. I could use Amazon EC2 or a similar service (please tell me of any) and pay by the tiniest unit of data. I’m a little scared of this because I have no idea how to do a cost-benefit analysis between this and a regular VPS host. The advantage of this approach would be that I only pay for what I use, making it very scalable, but I don’t know how the overall cost would compare to any VPS host under similar circumstances. What factors enter into the cost of Amazon EC2? What can I expect to pay under either option for regular traffic for a new website? Which one is more desirable when a rush of visitors drives up my bandwidth bill?

    One last note: I know Plone isn’t common for websites for individuals, but please don’t try to talk me out of it here; that’s a completely different subject. For now, assume I’m sticking with Plone for good. Also, I have seen the Plone hosting services list on Plone.org; it’s twenty pages long, and the first page was nothing but professional Plone consulting services that sometimes offer hosting for business clients. So, that wasn’t much help. Thank you!

    Read the article

  • Remove IP address from the URL of website using apache

    - by sapatos
    I'm on an EC2 instance and have a domain, domain.com, linked to the EC2 nameservers, and it is happily serving my pages if I type domain.com in the URL. However, when a page is served it resolves the URL to 1.1.1.10/directory/page.php. Using Apache, I've set up the following VirtualHost, following the examples provided at http://httpd.apache.org/docs/2.0/dns-caveats.html:

        Listen 80
        NameVirtualHost 1.1.1.10:80

        <VirtualHost 1.1.1.10:80>
            DocumentRoot /var/www/html/directory
            ServerName domain.com
            # Other directives here ...
            <FilesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf)$">
                Header set Cache-Control "max-age=290304000, public"
            </FilesMatch>
        </VirtualHost>

    However, I'm not getting any changes to how the URL is displayed. This is the only VirtualHost configured on this site, and I've confirmed it's the one being used, as I've managed to break it a number of times whilst experimenting with the configuration. The Route 53 entries I have are:

        domain.com   A     1.1.1.10
        domain.com   NS    ns-11.awsdns-11.com ns-111.awsdns-11.net ns-1111.awsdns-11.org ns-1111.awsdns-11.co.uk
        domain.com   SOA   ns-11.awsdns-11.com. awsdns-hostmaster.amazon.com. 1 1100 100 1101100 11100

    Read the article
