Search Results

Search found 11704 results on 469 pages for 'api'.

  • Is there an SO API which can fetch all Questions & Answers for a particular keyword

    - by user4203
    I am looking for an API which helps in fetching all the Questions & Answers from SO and other Stack Exchange sites for a particular "keyword". Later, using XML-RPC, these questions will be posted as blog posts and their answers as replies to those posts. Just wondering whether it's possible with an API. One of my friends suggested that we should scrape, but I don't want screen scraping; instead I am looking for API requests which can handle this.
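
    For reference, the Stack Exchange API does expose exactly this kind of keyword search. A minimal sketch, assuming the current public api.stackexchange.com v2.3 routes (which postdate this question); the keyword and site values are placeholders:

        using System;
        using System.Net;
        using System.Net.Http;
        using System.Threading.Tasks;

        class SeSearchSketch
        {
            static async Task Main()
            {
                // The API gzip-compresses every response, so enable decompression.
                var handler = new HttpClientHandler
                {
                    AutomaticDecompression = DecompressionMethods.GZip
                };
                using var client = new HttpClient(handler);

                string keyword = Uri.EscapeDataString("api");
                string url = "https://api.stackexchange.com/2.3/search" +
                             $"?order=desc&sort=activity&intitle={keyword}&site=stackoverflow";

                // Returns JSON with an "items" array of matching questions;
                // answers can then be fetched via /questions/{ids}/answers.
                string json = await client.GetStringAsync(url);
                Console.WriteLine(json);
            }
        }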

  • Should I use both WCF and ASP.NET Web API

    - by Mithir
    We already have a WCF API with basicHttpBinding. Some of the calls have complex objects in both the response and the request. We need to add RESTful abilities to the API. At first I tried adding a webHttp endpoint, but I got "At most one body parameter can be serialized without wrapper elements". If I made it Wrapped, it wasn't as pure as I need it to be. I got to read this, and this (which states "ASP.NET Web API is the new way to build RESTful service on .NET"). So my question is: should I make two APIs (two different projects), one for SOAP with WCF and one RESTful with ASP.NET Web API? Is there anything wrong, architecturally speaking, with this approach?
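
    There is nothing unusual about that split as long as both front ends stay thin. A sketch of one common arrangement, assuming a shared core library so the SOAP and REST projects don't duplicate logic (all type names here are illustrative):

        using System.ServiceModel;
        using System.Web.Http;

        // Shared core project: all real logic lives here.
        public class Order { public int Id { get; set; } }

        public interface IOrderService { Order GetOrder(int id); }

        public class OrderService : IOrderService
        {
            public Order GetOrder(int id) => new Order { Id = id };
        }

        // WCF project: a thin SOAP facade over the shared service.
        [ServiceContract]
        public class OrderSoapService
        {
            private readonly IOrderService _orders = new OrderService();

            [OperationContract]
            public Order GetOrder(int id) => _orders.GetOrder(id);
        }

        // ASP.NET Web API project: a thin RESTful facade over the same service.
        public class OrdersController : ApiController
        {
            private readonly IOrderService _orders = new OrderService();

            public Order Get(int id) => _orders.GetOrder(id);
        }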

  • HP OpenView ServiceDesk: looking for API information?

    - by Zagorulkin Dmitry
    Good day, folks. I am very confused by this situation. I need to implement a system which will be based on the HP OpenView Service Desk 4.5 API, but this product has reached the end of its support period and no information is available on the official site. I am looking for information about this API (articles, samples, etc.). Right now I have only web-api.jar and its Javadoc, and the methods in the Javadoc are poorly documented. If you have any info, please share it with me. Thanks. Second question: are there methods for understanding an API (with a huge number of methods) when it is not documented or the information is not available? PS: If this question does not belong here, I will delete it.

  • How should an API use HTTP Basic authentication

    - by user1626384
    When an API requires that a client authenticate to it, I've seen two different scenarios used, and I am wondering which I should use for my situation. Example 1: an API is offered by a company to allow third parties to authenticate with a token and secret using HTTP Basic. Example 2: an API accepts a username and password via HTTP Basic to authenticate an end user; generally they get a token back for future requests. My setup: I will have a JSON API that I use as the backend for a mobile and a web app. It seems like good practice for both the mobile and web app to send along a token and secret so only these two apps can access the API, blocking any other third party. But the mobile and web app allow users to log in, submit posts, view their data, etc. So I would want them to log in via HTTP Basic as well on each request. Do I somehow use a combination of both these methods, or only send the end user credentials (username and token) on each request? If I only send the end user credentials, do I store them in a cookie on the client?
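
    For the mechanics: HTTP Basic is just a base64-encoded "user:password" pair in the Authorization header, so it must only ever travel over HTTPS. A minimal client-side sketch of the "Example 2" pattern, authenticating once with Basic and receiving a token for later requests; the URL and credentials are placeholders:

        using System;
        using System.Net.Http;
        using System.Net.Http.Headers;
        using System.Text;
        using System.Threading.Tasks;

        class BasicAuthSketch
        {
            static async Task Main()
            {
                using var client = new HttpClient();

                // Basic auth: base64("username:password") in the Authorization header.
                string credentials = Convert.ToBase64String(
                    Encoding.ASCII.GetBytes("username:password"));
                client.DefaultRequestHeaders.Authorization =
                    new AuthenticationHeaderValue("Basic", credentials);

                // Exchange the credentials for a token to send on later requests,
                // instead of re-sending the password every time.
                var response = await client.PostAsync(
                    "https://api.example.com/tokens", content: null);
                Console.WriteLine(await response.Content.ReadAsStringAsync());
            }
        }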

  • How to document requirements for an API systematically?

    - by Heinrich
    I am currently working on a project where I have to analyze the requirements that two given IT systems, which use cloud computing, have for a Cloud API. In other words, I have to analyze what requirements these systems have for a Cloud API such that they would be able to switch to it while still accomplishing their current goals. Let me give you an example of some informal requirements of Project A:

    - When starting virtual machines in the cloud through the API, it must be possible to specify the memory size, CPU type, operating system and an SSH key for the root user.
    - It must be possible to monitor the inbound and outbound network traffic per hour per virtual machine.
    - The API must support the assignment of public IPs to a virtual machine and the retrieval of those public IPs.
    - ...

    In a later stage of the project I will analyze some cloud computing standards that standardize cloud APIs to find out where possible shortcomings in the current standards are. A finding could, and probably will, be that a certain standard does not support monitoring resource usage and thus is not currently usable. I am currently trying to find a way to systematically write down and classify my requirements; I feel that the way I currently have them written down (like the three points above) is too informal. I have read a couple of requirements engineering and software architecture books, but they all focus too much on details and implementation. I really only care about the functionality provided through the API/interface, and I don't think UML diagrams etc. are the right choice for me. I think the requirements I have collected so far can be described as user stories, but is that already enough for a sophisticated requirements analysis? Probably I should go "one level deeper", as sketched below... Any advice/learning resources for me?
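
    One lightweight step "deeper" than a user story is to restate each item as a classified, testable capability. A sketch of what the first informal requirement above might look like; the classification fields are illustrative, not taken from any particular standard:

        ID:          REQ-A-01
        Category:    Compute / VM provisioning (functional)
        Capability:  Start a virtual machine through the API
        Parameters:  memory size, CPU type, operating system, root SSH key
        Acceptance:  A VM started via the API boots with the requested memory,
                     CPU and OS image, and the given key is authorized for
                     root login.
        Priority:    Must-have

    A table of such entries, one row per capability, also makes the later standards comparison mechanical: each standard either covers REQ-A-01 or it does not.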

  • Using PayPal to process credit cards in Sweden through an API [on hold]

    - by Mastikator
    I'm looking for a PayPal API that lets me process credit card payments without being redirected to a PayPal site and without forcing consumers to use a PayPal account, and it needs to work in Sweden. I've looked at DoDirectPayment, Express Checkout and the PayPal Pro gateway, and none of them let me process credit cards in Sweden via an API that doesn't force the user to visit the PayPal login site. I have a form on my webpage where the user types their credit card number, CVV2, expiration, name, address, etc. I need an API that works in Sweden and simply processes the request, without the step of being redirected to a PayPal website. The ones I have found only work in a select few countries; is there an international solution? I've already spent over 12 work hours just looking for an API that meets my requirements.

  • Deprecate a web API: Best Practices?

    - by TheLQ
    Eventually you need to deprecate parts of your public web API. However, I'm confused about what the best way to do it would be. If you have a large third-party app base, just yanking old versions of the API seems like the wrong way to do it, as almost all apps would fail overnight. However, you can't keep ancient web APIs available forever either, as they might be outdated or there may be significant changes that make supporting them impossible. What are some best practices for deprecating old web APIs?
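
    One widely used pattern is to keep the old version routable for a published sunset window and advertise the deprecation in machine-readable response headers, so well-behaved clients can warn their own users before the cut-off. A sketch for ASP.NET Web API; it assumes attribute routing is enabled via config.MapHttpAttributeRoutes(), the routes and dates are illustrative, and the "Sunset" header follows what was later standardized as RFC 8594:

        using System.Net;
        using System.Net.Http;
        using System.Web.Http;

        public class LegacyWidgetsController : ApiController
        {
            // Old version: still works, but announces its end-of-life.
            [Route("api/v1/widgets/{id}")]
            public HttpResponseMessage GetV1(int id)
            {
                var response = Request.CreateResponse(HttpStatusCode.OK,
                    new { id, name = "widget" });

                // When this version stops working, and where to migrate to.
                response.Headers.Add("Sunset", "Sat, 31 Dec 2016 23:59:59 GMT");
                response.Headers.Add("Link",
                    "<https://api.example.com/api/v2/widgets/" + id + ">; rel=\"successor-version\"");
                return response;
            }
        }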

  • API always returns JSONObject or JSONArray: best practices

    - by Michael Laffargue
    I'm making an API that will return data in JSON. I also wanted, on the client side, to make a utility class to call this API. Something like:

        JSONObject sendGetRequest(Url url);
        JSONObject sendPostRequest(Url url, HashMap postData);

    However, sometimes the API sends back an array of objects: [{id:1},{id:2}]. I now have several choices:

    - Make the method test for JSONArray or JSONObject and send back an Object that I will have to cast in the caller.
    - Make a method that returns JSONObject and one for JSONArray (like sendGetRequestAndReturnAsJSONArray).
    - Make the server always send arrays, even for one element.
    - Make the server always send objects wrapping my array.

    I'm leaning toward the last two options, since I think it would be a good thing to force the API to send back a consistent type of data. But what would be the best practice (if one exists)? Always send arrays, or always send objects?
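
    A sketch of the "always send objects" option: wrap every collection in an envelope so the payload is a JSON object whether it holds one element or many. The {"items": [...], "count": n} shape is a common convention, not a standard:

        using System.Collections.Generic;
        using System.Web.Script.Serialization;

        public class Envelope<T>
        {
            public List<T> items { get; set; }
            public int count { get; set; }
        }

        class EnvelopeDemo
        {
            static void Main()
            {
                var payload = new Envelope<object>
                {
                    items = new List<object> { new { id = 1 }, new { id = 2 } },
                    count = 2
                };

                // Always an object, never a bare array:
                // {"items":[{"id":1},{"id":2}],"count":2}
                System.Console.WriteLine(new JavaScriptSerializer().Serialize(payload));
            }
        }

    A side benefit of the envelope is that it leaves room for metadata (paging, totals) later without breaking existing clients.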

  • How to create markers with the Google local search API?

    - by cheesebunz
    As the question says, I do not want to use it from the API directly, but instead combine it with my own code; however, I can't seem to implement it with the code I have now. The markers do not come out, and the search completely disappears if I try implementing it with my code. This is a section of my code: http://www.mediafire.com/?0minqxgwzmx

  • When is a Google Maps API key required?

    - by Thomas
    Recently Google changed its policy on the use of API keys: you're now supposed to no longer need an API key to place Google Maps on your website. And this worked perfectly. I now have this map (without an API key) running on my localhost, and it works fine. But as soon as I place it online, I get a popup saying that I need an API key, while on another page of that website Google Maps does work. Could it have something to do with the fact that the map that doesn't work has a lot (30+) of markers on it? Using an API key wouldn't actually be a very nice solution for me, as this is part of a WordPress plugin used on many websites.

  • Facebook Graph API authentication in a canvas app and tracking the session

    - by cdpnet
    The short question is: how can I use the Graph API OAuth redirect mechanism to authenticate the user and save the retrieved access_token, and also use the JavaScript SDK when needed (the problem is that the JavaScript SDK will have a different access_token when initialized)? I initially set up my Facebook iframe canvas app with single sign-on. This works well with the Graph API, as I am able to use the access_token saved by Facebook's JavaScript when it detects a session change (user logged in). But I would rather not do single sign-on, and instead use the Graph API redirect to send the user to a permissions dialog. But if he has already given permissions, I shouldn't redirect the user. How do I handle this? Another question: I have done the Graph API redirects for authentication and have retrieved the access_token as well. But then, what if I want to use the JavaScript call FB.ui to do stream.Publish? I think it will use its own access_token, which is set during FB.init when it detects the session. So I am looking for some path here: how to use the Graph API for authentication and also use Facebook's JavaScript SDK when needed. P.S. I'm using ASP.NET MVC 2. I have an authentication filter developed, which needs to detect the user's authentication state and redirect (currently it does this to the Graph API authorize URL).
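
    For the redirect half, the server-side flow at the time worked roughly as sketched below: send the user to the authorize URL (Facebook redirects straight back with a code, without showing the dialog, if permissions were already granted), then exchange the code for an access_token you can persist. App ID, secret and redirect URI are placeholders:

        using System;
        using System.Net;

        class GraphOAuthSketch
        {
            const string AppId = "YOUR_APP_ID";
            const string AppSecret = "YOUR_APP_SECRET";
            const string RedirectUri = "http://apps.facebook.com/yourapp/";

            // Step 1: where the authentication filter should redirect to.
            public static string BuildAuthorizeUrl()
            {
                return "https://graph.facebook.com/oauth/authorize" +
                       "?client_id=" + AppId +
                       "&redirect_uri=" + Uri.EscapeDataString(RedirectUri) +
                       "&scope=publish_stream";
            }

            // Step 2: exchange the returned ?code=... for an access token.
            public static string ExchangeCodeForToken(string code)
            {
                string url = "https://graph.facebook.com/oauth/access_token" +
                             "?client_id=" + AppId +
                             "&redirect_uri=" + Uri.EscapeDataString(RedirectUri) +
                             "&client_secret=" + AppSecret +
                             "&code=" + code;
                using (var web = new WebClient())
                {
                    // The response body was form-encoded: access_token=...&expires=...
                    return web.DownloadString(url);
                }
            }
        }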

  • Does Google Maps API v3 allow larger zoom values?

    - by Dr1Ku
    If you use the satellite GMapType using this Google-provided example in v3 of the API, the maximum zoom level has a scale of 2m / 10ft, whereas using the v2 version of another Google-provided example (I had to use another one, since control-simple doesn't have the scale control) yields a maximum scale of 20m / 50ft. Is this a new "feature" of v3? I should mention that I've tested the examples in the same GLatLng regions, so my guess is that tile detail level doesn't influence it; am I mistaken? As mentioned in another question, v3 should be considered of very Labs-y/beta quality, so its use in production is discouraged for the time being. I've been drawn to the subject since I have to "increase the zoom level of a GMap". The answers here seem to suggest using GTileLayer, and I'm considering GMapCreator, although this will involve some effort. What I'm trying to achieve is a larger zoom level; a scale of 2m / 10ft would be perfect. I have a map where the tiles aren't that hi-res and quite a few markers. Since the area doesn't have hi-res tiles, the distance between the markers is really tiny, creating some problematic overlapping. Or better yet: how can you create a custom map which allows higher zoom levels, as over the Google campus, where the 2m / 10ft scale is achieved, without using your own tile server? I've seen an example in a fellow Stackoverflower's GMaps sandbox where the tiles are manually created based on the zoom level. I don't think I'm making much more sense, so I'm just going to end this big question here; I've been wandering around trying to find a solution for hours now. Hope that someone comes to my aid though! Thank you in advance!

  • Kernel APIs, or using APIs in the kernel

    - by user513647
    Hello everybody. I'd like to know if and how I can access API calls inside the kernel. I need them to perform several integrity checks on a program of mine running in user mode, but I don't know how to access the APIs and functions required to do so. Does anybody know how to obtain the process ID of my user-mode process, and how to access all its memory to perform the check? Thanks in advance. PS: I'm on a Windows XP machine.

  • API Message Localization

    - by Jesse Taber
    In my post, "Keep Localizable Strings Close To Your Users", I talked about the internationalization and localization difficulties that can arise when you sprinkle static localizable strings throughout the different logical layers of an application. The main point of that post is that you should have your localizable strings reside as close to the user-facing modules of your application as possible. For example, if you're developing an ASP.NET web forms application, all of the localizable strings should be kept in .resx files that are associated with the .aspx views of the application. In this post I want to talk about how this same concept can be applied when designing and developing APIs.

    An API Facilitates Machine-to-Machine Interaction

    You can typically think about a web, desktop, or mobile application as a collection of "views" or "screens" through which users interact with the underlying logic and data. The application can be designed based on the assumption that there will be a human being on the other end of the screen working the controls. You are designing a machine-to-person interaction, and the application should be built in a way that facilitates the user's clear understanding of what is going on. Dates should be formatted in a way that the user will be familiar with, messages should be presented in the user's preferred language, etc. When building an API, however, there are no screens and you can't make assumptions about who or what is on the other end of each call. An API is, by definition, a machine-to-machine interaction, and it should be built in a way that facilitates a clear and unambiguous understanding of what is going on. Dates and numbers should be formatted in predictable and standard ways (e.g. ISO 8601 dates) and messages should be presented in machine-parseable formats. For example, consider an API for a time tracking system that exposes a resource for creating a new time entry. The JSON for creating a new time entry for a user might look like:

        {
            "userId": 4532,
            "startDateUtc": "2012-10-22T14:01:54.98432Z",
            "endDateUtc": "2012-10-22T11:34:45.29321Z"
        }

    Note how the parameters for start and end date are both expressed as ISO 8601 compliant dates in UTC. Using a date format like this in our API leaves little room for ambiguity. It's also important to note that using ISO 8601 dates is a much, much saner thing than the \/Date(<milliseconds since epoch>)\/ nonsense that is sometimes used in JSON serialization. Probably the most important thing to note about the JSON snippet above is the fact that the end date comes before the start date! The API should recognize that and disallow the time entry from being created, returning an error to the caller. You might be inclined to send a response that looks something like this:

        {
            "errors": [ { "message": "The end date must come after the start date" } ]
        }

    While this may seem like an appropriate thing to do, there are a few problems with this approach:

    - What if there is a user somewhere on the other end of the API call that doesn't speak English?
    - What if the message provided here won't fit properly within the UI of the application that made the API call?
    - What if the verbiage of the message isn't consistent with the rest of the application that made the API call?
    - What if there is no user directly on the other end of the API call (e.g. this is a batch job uploading time entries once per night, unattended)?

    The API knows nothing about the context from which the call was made.
    There are steps you could take to give the API some context (e.g. allow the caller to send along a language code indicating the language that the end user speaks), but that will only get you so far. As the designer of the API you could make some assumptions about how the API will be called, but if we start making assumptions we could very easily make the wrong assumptions. In this situation it's best to make no assumptions and simply design the API in such a way that the caller has the responsibility to convey error messages in a manner that is appropriate for the context in which the error was raised. You could work around some of these problems by allowing callers to add metadata to each request describing the context from which the call is being made (e.g. accepting a 'locale' parameter denoting the desired language), but that would add needless clutter and complexity. It's better to keep the API simple and push those context-specific concerns down to the caller whenever possible. For our very simple time entry example, this can be done by simply changing our error message response to look like this:

        {
            "errors": [ { "code": 100 } ]
        }

    By changing our error from exposing a string to a numeric code that is easily parseable by another application, we've placed all of the responsibility for conveying the actual meaning of the error message on the caller. It's best to have the caller be responsible for conveying this meaning because the caller understands the context much better than the API does. Now the caller can see error code 100, know that it means that the end date submitted falls before the start date, and take appropriate action. All of the problems listed above become non-issues because the caller can simply translate the error code of '100' into the proper action and message for the current context. The numeric code representation of the error is a much better way to facilitate the machine-to-machine interaction that the API is meant to facilitate.

    An API Does Have Human Users

    While APIs should be built for machine-to-machine interaction, people still need to wire these interactions together. As a programmer building a client application that will consume the time entry API, I would find it frustrating to have to go dig through the API documentation every time I encounter a new error code (assuming the documentation exists and is accurate). The numeric error code approach hurts the discoverability of the API and makes it painful to integrate with. We can help ease this pain by merging our two approaches:

        {
            "errors": [ { "code": 100, "message": "The end date must come after the start date" } ]
        }

    Now we have an easily parseable numeric error code for the machine-to-machine interaction that the API is meant to facilitate, and a human-readable message for programmers working with the API. The human-readable message here is not intended to be viewed by end-users of the API and as such is not really a "localizable string" in my opinion. We could opt to expose a locale parameter for all API methods and store translations for all error messages, but that's a lot of extra effort and overhead that doesn't add a lot of real value to the API. I might be a bit of an "ugly American", but I think it's probably fine to have the API return English messages when the target for those messages is a programmer.
When resources are limited (which they always are), I’d argue that you’re better off hard-coding these messages in English and putting more effort into building more useful features, improving security, tweaking performance, etc.
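
    A minimal sketch of the caller-side half of this scheme: the client owns the mapping from error codes to localized, context-appropriate messages. Error code 100 comes from the post; the German entry and the helper itself are illustrative:

        using System.Collections.Generic;

        static class ApiErrorMessages
        {
            // Keyed by (error code, locale). In a real app these strings would
            // live in .resx files next to the UI, per the post's earlier advice.
            static readonly Dictionary<(int Code, string Locale), string> Messages =
                new Dictionary<(int, string), string>
                {
                    { (100, "en"), "The end date must come after the start date." },
                    { (100, "de"), "Das Enddatum muss nach dem Startdatum liegen." },
                };

            public static string For(int code, string locale)
            {
                return Messages.TryGetValue((code, locale), out var message)
                    ? message
                    : "Unexpected error (code " + code + ").";
            }
        }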

  • Setting up and using Bing Translate API Service for Machine Translation

    - by Rick Strahl
    Last week I spent quite a bit of time trying to set up the Bing Translate API service. I can honestly say this was one of the most screwed up developer experiences I've had in a long while - specifically related to the byzantine sign-up process that Microsoft has in place. Not only is it nearly impossible to find decent documentation on the required signup process, some of the links in the docs are just plain wrong, and some of the account pages you need to access the actual account information once signed up are not linked anywhere from the administration UI. To make things even harder, the APIs changed a while back, with a completely new authentication scheme that's described in a documentation topic that isn't directly linked, which also made for a very frustrating search experience. It's a bummer that this is the case too, because the actual API itself is easy to use and works very well - fast and reasonably accurate (as accurate as you can expect machine translation to be). But the sign-up process is a pain in the ass, doubtlessly leaving many people giving up in frustration. In this post I'll try to hit all the points needed to set up to use the Bing Translate API in one place, since such a document seems to be missing from Microsoft. Hopefully the API folks at Microsoft will get their shit together and actually provide this sort of info on their site…

    Signing Up

    The first step required is to create a Windows Azure MarketPlace account:

    - Go to: https://datamarket.azure.com/
    - Sign in with your Windows Live Id. If you don't have an account you will be taken to a registration page which you have to fill out; follow the links and complete the registration.
    - Once you're signed in you can start adding services. Click on the Data link on the main page.
    - Select Microsoft Translator from the list. This adds the Microsoft Bing Translator to your services.

    Pricing

    The page shows the pricing matrix and the free tier, which provides 2 megabytes of translations a month for free. Prices go up steeply from there. Pricing is determined by actual bytes of the result translations used. Max translations are 1000 characters, so at minimum this means you get around 2000 translations a month for free. However, most translations are probably much shorter, so you can expect a larger number of translations to go through. For testing or low-volume translations this should be just fine. Once signed up there are no further instructions and you're left in limbo on the MS site.

    Register your Application

    Once you've created the Data association with Translator, the next step is registering your application. To do this you need to access your developer account. Go to https://datamarket.azure.com/developer/applications/register and:

    - Provide a ClientId, which is effectively the unique string identifier for your application (not your customer id!)
    - Provide your name
    - The client secret is auto-created, and this becomes your 'password'
    - For the redirect url provide any https url: https://microsoft.com works
    - Give this application a description of your choice so you can identify it in the list of apps

    Now, once you've registered your application, keep track of the ClientId and ClientSecret - those are the two keys you need to authenticate before you can call the Translate API. Oddly, the applications page is hidden from the Azure Portal UI. I couldn't find a direct link from anywhere on the site back to this page where I can examine my developer application keys.
    To find them you can go to: https://datamarket.azure.com/developer/applications

    You can come back here to look at your registered applications and pick up the ClientID and ClientSecret. Fun eh? But we're now ready to actually call the API and do some translating.

    Using the Bing Translate API

    The good news is that after this signup hell, using the API is pretty straightforward. To use the translation API you'll need to actually use two services: you need to call an authentication API service first, before you can call the actual translator API. These two APIs live on different domains, and the authentication API returns JSON data while the translator service returns XML. So much for consistency.

    Authentication

    The first step is authentication. The service uses oAuth authentication with a bearer token that has to be passed to the translator API. The authentication call retrieves the oAuth token that you can then use with the translate API call. The bearer token has a short 10 minute life time, so while you can cache it for successive calls, the token can't be cached for long periods. This means for Web backend requests you typically will have to authenticate each time unless you build a more elaborate caching scheme that takes the timeout into account (perhaps using the ASP.NET Cache object). For low volume operations you can probably get away with simply calling the auth API for every translation you do. To call the Authentication API use code like this:

        /// <summary>
        /// Retrieves an oAuth authentication token to be used on the translate
        /// API request. The result string needs to be passed as a bearer token
        /// to the translate API.
        ///
        /// You can find client ID and Secret (or register a new one) at:
        /// https://datamarket.azure.com/developer/applications/
        /// </summary>
        /// <param name="clientId">The client ID of your application</param>
        /// <param name="clientSecret">The client secret or password</param>
        public string GetBingAuthToken(string clientId = null, string clientSecret = null)
        {
            string authBaseUrl = "https://datamarket.accesscontrol.windows.net/v2/OAuth2-13";

            if (string.IsNullOrEmpty(clientId) || string.IsNullOrEmpty(clientSecret))
            {
                ErrorMessage = Resources.Resources.Client_Id_and_Client_Secret_must_be_provided;
                return null;
            }

            var postData = string.Format("grant_type=client_credentials&client_id={0}" +
                                         "&client_secret={1}" +
                                         "&scope=http://api.microsofttranslator.com",
                                         HttpUtility.UrlEncode(clientId),
                                         HttpUtility.UrlEncode(clientSecret));

            // POST Auth data to the oauth API
            string res, token;
            try
            {
                var web = new WebClient();
                web.Encoding = Encoding.UTF8;
                res = web.UploadString(authBaseUrl, postData);
            }
            catch (Exception ex)
            {
                ErrorMessage = ex.GetBaseException().Message;
                return null;
            }

            var ser = new JavaScriptSerializer();
            var auth = ser.Deserialize<BingAuth>(res);
            if (auth == null)
                return null;

            token = auth.access_token;
            return token;
        }

        private class BingAuth
        {
            public string token_type { get; set; }
            public string access_token { get; set; }
        }

    This code basically takes the client id and secret and posts it at the oAuth endpoint, which returns a JSON string. Here I use the JavaScript serializer to deserialize the JSON into a custom object I created just for deserialization. You can also use JSON.NET and dynamic deserialization if you are already using JSON.NET in your app, in which case you don't need the extra type. In my library that houses this component I don't, so I just rely on the built-in serializer. The auth method returns a long base64 encoded string which can be used as a bearer token in the translate API call.
    Translation

    Once you have the authentication token you can pass it to the translate API. The auth token is passed as an Authorization header, with the value prefixed with 'Bearer '. Here's what the simple Translate API call looks like:

        /// <summary>
        /// Uses the Bing API service to perform translation.
        /// Bing can translate up to 1000 characters.
        ///
        /// Requires that you provide a ClientId and ClientSecret
        /// or set the configuration values for these two.
        ///
        /// More info on setup:
        /// http://www.west-wind.com/weblog/
        /// </summary>
        /// <param name="text">Text to translate</param>
        /// <param name="fromCulture">Two letter culture name</param>
        /// <param name="toCulture">Two letter culture name</param>
        /// <param name="accessToken">Pass an access token retrieved with GetBingAuthToken.
        /// If not passed the default keys from .config file are used if any</param>
        public string TranslateBing(string text, string fromCulture, string toCulture,
                                    string accessToken = null)
        {
            string serviceUrl = "http://api.microsofttranslator.com/V2/Http.svc/Translate";

            if (accessToken == null)
            {
                accessToken = GetBingAuthToken();
                if (accessToken == null)
                    return null;
            }

            string res;
            try
            {
                var web = new WebClient();
                web.Headers.Add("Authorization", "Bearer " + accessToken);

                string ct = "text/plain";
                string postData = string.Format("?text={0}&from={1}&to={2}&contentType={3}",
                                                HttpUtility.UrlEncode(text),
                                                fromCulture, toCulture,
                                                HttpUtility.UrlEncode(ct));

                web.Encoding = Encoding.UTF8;
                res = web.DownloadString(serviceUrl + postData);
            }
            catch (Exception e)
            {
                ErrorMessage = e.GetBaseException().Message;
                return null;
            }

            // result is a single XML Element fragment
            var doc = new XmlDocument();
            doc.LoadXml(res);
            return doc.DocumentElement.InnerText;
        }

    The first part of this code deals with ensuring the auth token exists. You can either pass the token into the method manually or let the method automatically retrieve the auth code on its own. In my case I'm using this inside of a Web application, and in that situation I simply need to re-authenticate every time as there's no convenient way to manage the lifetime of the auth cookie. The auth token is added as an Authorization HTTP header prefixed with 'Bearer ' and attached to the request. The text to translate, the from and to language codes and a result format are passed on the query string of this HTTP GET request against the Translate API. The translate API returns an XML string which contains a single element with the translated string.

    Using the Wrapper Methods

    It should be pretty obvious how to use these two methods, but here are a couple of test methods that demonstrate the two usage scenarios:

        [TestMethod]
        public void TranslateBingWithAuthTest()
        {
            var translate = new TranslationServices();

            string clientId = DbResourceConfiguration.Current.BingClientId;
            string clientSecret = DbResourceConfiguration.Current.BingClientSecret;

            string auth = translate.GetBingAuthToken(clientId, clientSecret);
            Assert.IsNotNull(auth);

            string text = translate.TranslateBing("Hello World we're back home!", "en", "de", auth);
            Assert.IsNotNull(text, translate.ErrorMessage);
            Console.WriteLine(text);
        }

        [TestMethod]
        public void TranslateBingIntegratedTest()
        {
            var translate = new TranslationServices();
            string text = translate.TranslateBing("Hello World we're back home!", "en", "de");
            Assert.IsNotNull(text, translate.ErrorMessage);
            Console.WriteLine(text);
        }

    Other API Methods

    The Translate API has a number of methods available, and this one is the simplest one but probably also the most common one that translates a single string.
    You can find additional methods for this API here: http://msdn.microsoft.com/en-us/library/ff512419.aspx

    Soap and AJAX APIs are also available and documented on MSDN: http://msdn.microsoft.com/en-us/library/dd576287.aspx

    These links will be your starting points for calling other methods in this API.

    Dual Interface

    I've talked about my database driven localization provider here in the past, and it's for this tool that I added the Bing localization support. Basically I have a localization administration form that allows me to translate individual strings right out of the UI, using both Google and Bing APIs (screenshot omitted here). As you can see in that example, the results from Google and Bing can vary quite a bit - in this case Google is stumped while Bing actually generated a valid translation. At other times it's the other way around - it's pretty useful to see multiple translations at the same time. Here I can choose from one of the values and directly embed them into the translated text field.

    Lost in Translation

    There you have it. As I mentioned, once you have all the bureaucratic crap out of the way, calling the APIs is fairly straightforward and reasonably fast, even if you have to call the Auth API for every call. Hopefully this post will help out a few of you trying to navigate the Microsoft bureaucracy, at least until next time Microsoft upends everything and introduces new ways to sign up again. Until then - happy translating…

    Related Posts:
    - Translation method source on Github
    - Translating with Google Translate without Google API Keys
    - Creating a data-driven ASP.NET Resource Provider

    © Rick Strahl, West Wind Technologies, 2005-2013
    Posted in Localization, ASP.NET, .NET

  • Question about API and Web application code sharing

    - by opendd
    This is a design question. I have a multi-part application with several user types. There is a user client for the patient that interacts with a web service. There is an API evolving behind the web service that will be exposed to institutional "users", and an interface for clinicians, researchers and admin types. The patient UI is Flex. The clinician/admin portion of the application is RoR. The API is RoR/Rack based. The web service component is Java WS. All components access the same data source. These components are deployed as separate components on their own subdomains; this decision was made to allow for scaling the components individually as needed. Initially, the decision was made to split the code for the RoR web application from the RoR API, in the interests of security and keeping the components focused on specific tasks. Over the course of time there is necessarily going to be overlap, and I am second-guessing my decision to keep the code totally separate. I am noticing code from the admin side being lifted, modified and used in the API. This being the case, I have been considering merging the Ruby-based repositories. I am interested in ideas and insight on this situation, along with the reasoning behind your thoughts. Thanks.

  • How to share a folder using the Ubuntu One Web API

    - by Mario César
    I have successfully implemented OAuth authorization with Ubuntu One in Django; here are my views and models: https://gist.github.com/mariocesar/7102729

    Right now I can use the file_storage Ubuntu One API. For example, the following will ask whether a path exists, then create the directory, and then get the information on the created path to prove it was created:

        >>> user.oauth_access_token.get_file_storage(volume='/~/Ubuntu One', path='/Websites/')
        <Response [404]>
        >>> user.oauth_access_token.put_file_storage(volume='/~/Ubuntu One', path='/Websites/', data={"kind": "directory"})
        <Response [200]>
        >>> user.oauth_access_token.get_file_storage(volume='/~/Ubuntu One', path='/Websites/').json()
        {u'content_path': u'/content/~/Ubuntu One/Websites',
         u'generation': 10784,
         u'generation_created': 10784,
         u'has_children': False,
         u'is_live': True,
         u'key': u'MOQgjSieTb2Wrr5ziRbNtA',
         u'kind': u'directory',
         u'parent_path': u'/~/Ubuntu One',
         u'path': u'/Websites',
         u'resource_path': u'/~/Ubuntu One/Websites',
         u'volume_path': u'/volumes/~/Ubuntu One',
         u'when_changed': u'2013-10-22T15:34:04Z',
         u'when_created': u'2013-10-22T15:34:04Z'}

    So it works, and it's great; I'm happy about that. But I can't share a folder. My question is: how can I share a folder using the API? I found no web API to do this; the Ubuntu One SyncDaemon tool is the only mention of solving this: https://one.ubuntu.com/developer/files/store_files/syncdaemontool#ubuntuone.platform.tools.SyncDaemonTool.offer_share

    But I'm reluctant to maintain DBus and a daemon on my server for every Ubuntu One connection I have authorization for. Does anyone have an idea how I can use a web API to programmatically share a folder - even better, using the OAuth authorization tokens that I already have?

  • WordPress & InkSoft API

    - by user105405
    I am new to this whole website design and API bit. My husband has bought a license for the program InkSoft. Their site does not offer very much customization, so we decided to buy a WordPress.org site that is hosted through GoDaddy. With all of that said, I am trying to figure out a way to take the products that are on InkSoft's website, which get their information (for things like inventory) from the suppliers' warehouses, and put them on the WordPress site. There is an area on InkSoft where I can access "Store API... API feeds". I guess I am just confused about where this type of stuff goes in the WordPress site, or how to put it in there. If I go to the "Products" area of this Store API, I am given a URL that deals with the product data, and I am also given a HUGE list that contains entries such as:

        <ProductCategoryList>
            <vw_product_categories product_category_id="1000076" name="Most Popular"
                path="Most Popular"
                thumburl_front="REPLACE_DOMAIN_WITH/images/publishers/2433/ProductCategories/Most_Popular/80.gif" />

    Can anyone give me directions on what all this is and how to use it? Thank you!!

  • Implementation details of a database synchronisation API

    - by Daniel
    I want to achieve database synchronisation between my server database and a client application. The server runs MySQL and the applications may run different database technologies; their implementation isn't important. I have a MySQL database online, web-accessible via an API I wrote in PHP (just a detail). My client application ships with a copy of the online data. As time passes, my goal is to check for any changes in the online database and make these updates available to the client app via an API call: the app sends a date to an API endpoint corresponding to the last date it was updated, and the response would be JSON filled with all new and updated objects, plus the IDs of deleted ones, which makes it possible to update the local store appropriately. Essentially I want to do this: http://dbconvert.com/synchronization.php

    My question is about the implementation details. Would I need to add a column to my database tables with a "last modified" date? Since the client app could be very out of date if it's been offline for a long time, does that also mean I shouldn't delete data from the online database, but instead have another column called "deleted" set to 1, with the modified date updated appropriately? Would my SQL query simply check for all data with a modified date later than the date passed into the API request by the client? I feel like there's a lot more to it than having a ton of dates everywhere. I also worry that I will need to persist a lot of old data in order to ensure that old versions of the client app always have the opportunity to delete parts of their data when they are able to sync.
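
    What's described here is essentially the standard "tombstone" pattern, and yes, the query really is that simple. A sketch, assuming MySQL via the Connector/NET client; the table and column names are illustrative:

        using System;
        using MySql.Data.MySqlClient;

        class SyncSketch
        {
            // Instead of DELETE, mark the row and bump its timestamp:
            // UPDATE items SET deleted = 1, last_modified = UTC_TIMESTAMP() WHERE id = @id;

            // The sync endpoint then returns everything changed since the
            // client's last sync date, tombstones included.
            public static MySqlDataReader ChangesSince(MySqlConnection conn, DateTime sinceUtc)
            {
                const string sql =
                    "SELECT id, payload, deleted, last_modified " +
                    "FROM items " +
                    "WHERE last_modified > @since";

                var cmd = new MySqlCommand(sql, conn);
                cmd.Parameters.AddWithValue("@since", sinceUtc);
                return cmd.ExecuteReader();
            }
        }

    On the retention worry: tombstones only need to live as long as the oldest client you are willing to support incrementally; anything older than that can be told to do a full re-sync, which bounds how much old data you have to keep.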

  • Unit testing ASP.NET Web API controllers that rely on the UrlHelper

    - by cibrax
    UrlHelper is the class you can use in ASP.NET Web API to automatically infer links from the routing table without hardcoding anything. For example, the following code uses the helper to infer the location url for a new resource:

        public HttpResponseMessage Post(User model)
        {
            var response = Request.CreateResponse(HttpStatusCode.Created, model);
            var link = Url.Link("DefaultApi", new { id = model.Id, controller = "Users" });
            response.Headers.Location = new Uri(link);
            return response;
        }

    That code uses a previously defined route "DefaultApi", which you might configure in the HttpConfiguration object (this is the route generated by default when you create a new Web API project). The problem with UrlHelper is that it requires some initialization code before you can invoke it from a unit test (for testing the Post method in this example). If you don't initialize the HttpConfiguration and Request instances associated to the controller from the unit test, it will fail miserably. After digging into the ASP.NET Web API source code a little bit, I could figure out what the requirements for using the UrlHelper are: it relies on the routing table configuration and a few properties you need to add to the HttpRequestMessage. The following code illustrates what's needed:

        var controller = new UserController();
        controller.Configuration = new HttpConfiguration();

        var route = controller.Configuration.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional }
        );

        var routeData = new HttpRouteData(route,
            new HttpRouteValueDictionary
            {
                { "id", "1" },
                { "controller", "Users" }
            });

        controller.Request = new HttpRequestMessage(HttpMethod.Post, "http://localhost:9091/");
        controller.Request.Properties.Add(HttpPropertyKeys.HttpConfigurationKey, controller.Configuration);
        controller.Request.Properties.Add(HttpPropertyKeys.HttpRouteDataKey, routeData);

    The HttpRouteData instance should be initialized with the route values you will use in the controller method ("id" and "controller" in this example). Once you have correctly set up all those properties, you shouldn't have any problem using the UrlHelper. There is no need to mock anything else. Enjoy!!

  • Google Analytics API - Super simple?

    - by Jens Törnell
    Google Analytics API - too complicated? I've read about the Google Analytics API, but have heard from others that it is a bit complicated to make it work. I use PHP.

    Copy/paste example: my question is whether there is a copy/paste example anywhere on the web for getting a stats curve of the latest month, or just the numbers for that period.

    Important: I need to use the new Google Analytics API version for 2012. The other one is going to die soon.
