Search Results

Search found 11960 results on 479 pages for 'cups api'.

Page 5/479 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • "apt-get -f install" fails with "/usr/bin/dpkg returned an error code (1)"

    - by parsley72
    I started out trying to install CVS: $ sudo apt-get install cvs Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: libcups2 : Breaks: libcups2:i386 (!= 1.5.3-0ubuntu3) but 1.5.3-0ubuntu4 is to be installed libcups2:i386 : Breaks: libcups2 (!= 1.5.3-0ubuntu4) but 1.5.3-0ubuntu3 is to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). But when I try this I get: $ sudo apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following package was automatically installed and is no longer required: tzdata-java Use 'apt-get autoremove' to remove them. The following extra packages will be installed: libcups2 The following packages will be upgraded: libcups2 1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 14 not fully installed or removed. Need to get 0 B/172 kB of archives. After this operation, 0 B of additional disk space will be used. Do you want to continue [Y/n]? y dpkg: error processing libcups2 (--configure): libcups2:amd64 1.5.3-0ubuntu3 cannot be configured because libcups2:i386 is in a different version (1.5.3-0ubuntu4) dpkg: error processing libcups2:i386 (--configure): libcups2:i386 1.5.3-0ubuntu4 cannot be configured because libcups2:amd64 is in a different version (1.5.3-0ubuntu3) dpkg: dependency problems prevent configuration of libcupsmime1: libcupsmime1 depends on libcups2 (>= 1.5~); however: Package libcups2 is not configured yet. dpkg: error processing libcupsmime1 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libcupscgi1: libcupscgi1 depends on libcups2 (>= 1.4.0); however: Package libcups2 is not configured yet. dpkg: error processing libcupscgi1 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libcupsppdc1: libcupsppdc1 depends on libcups2 (>= 1.4.0); however: Package libcups2 is not configured yet. dpkg: error processing libcupsppdc1 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of cups-client: cups-client depends on libcups2 (>= 1.5.0); however: Package libcups2 is not configured yet. dpkg: error processing cups-client (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates its a followup error from a previous failure. No apport report written because the error message indicates its a followup error from a previous failure. No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already dpkg: dependency problems prevent configuration of cups-ppdc: cups-ppdc depends on libcups2 (>= 1.4.0); however: Package libcups2 is not configured yet. 
cups-ppdc depends on libcupsppdc1 (>= 1.4.0); however: Package libcupsppdc1 is not configured yet. dpkg: error processing cups-ppdc (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of cups: cups depends on libcups2 (>= 1.5.0); however: Package libcups2 is not configured yet. cups depends on libcupscgi1 (>= 1.4.2); however: Package libcupscgi1 is not configured yet. cups depends on libcupsmime1 (>= 1.5.0); however: Package libcupsmime1 is not configured yet. cups depends on libcupsppdc1 (>= 1.4.0); however: Package libcupsppdc1 is not configured yet. cups depends on cups-client (>= 1.5.3-0ubuntu4); however: Package cups-client is not configured yet. cups depends on cups-ppdc; however: Package cups-ppdc is not configured yet. dpkg: error processing cups (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libcupsdriver1: libcupsdriver1 depends on libcups2 (>= 1.4.0); however: Package libcups2 is not configured yet. dpkg: error processing libcupsdriver1 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of openjdk-7-jre-headless: openjdk-7-jre-headless depends on libcups2 (>= 1.4.0); however: Package libcups2 is not configured yet. dpkg: error processing openjdk-7-jre-headless (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of openjdk-7-jre: openjdk-7-jre depends on openjdk-7-jre-headless (= 7u7-2.3.2-1ubuntu0.12.04.1); however: Package openjdk-7-jre-headless is not configured yet. openjdk-7-jre depends on libcups2 (>= 1.4.0); however: Package libcups2 is not configured yet. dpkg: error processing openjdk-7-jre (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of cups-bsd: cups-bsd depends on libcups2 (>= 1.4.0); however: Package libcups2 is not configured yet. cups-bsd depends on cups-client (= 1.5.3-0ubuntu4); however: Package cups-client is not configured yet. dpkg: error processing cups-bsd (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of icedtea-7-jre-jamvm: icedtea-7-jre-jamvm depends on openjdk-7-jre-headless (= 7u7-2.3.2-1ubuntu0.12.04.1); however: Package openjdk-7-jre-headless is not configured yet. dpkg: error processing icedtea-7-jre-jamvm (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of openjdk-7-jre-lib: openjdk-7-jre-lib depends on openjdk-7-jre-headless (>= 7~b130~pre0); however: Package openjdk-7-jre-headless is not configured yet. dpkg: error processing openjdk-7-jre-lib (--configure): dependency problems - leaving unconfigured Errors were encountered while processing: libcups2 libcups2:i386 libcupsmime1 libcupscgi1 libcupsppdc1 cups-client cups-ppdc cups libcupsdriver1 openjdk-7-jre-headless openjdk-7-jre cups-bsd icedtea-7-jre-jamvm openjdk-7-jre-lib E: Sub-process /usr/bin/dpkg returned an error code (1) I've done "apt-get update" and "apt-get upgrade" and this hasn't fixed the problem: $ sudo apt-get upgrade Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these. 
The following packages have unmet dependencies: libcups2 : Breaks: libcups2:i386 (!= 1.5.3-0ubuntu3) but 1.5.3-0ubuntu4 is installed libcups2:i386 : Breaks: libcups2 (!= 1.5.3-0ubuntu4) but 1.5.3-0ubuntu3 is installed E: Unmet dependencies. Try using -f.
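
    A common way out of this particular deadlock - the amd64 and i386 copies of a multiarch package stuck at different versions - is to ask apt to bring both architectures to the same version explicitly. A hedged sketch, assuming 1.5.3-0ubuntu4 is the newest version your archive offers (check with apt-cache policy libcups2):

      $ sudo apt-get update
      $ sudo apt-get install libcups2=1.5.3-0ubuntu4 libcups2:i386=1.5.3-0ubuntu4
      $ sudo apt-get -f install

    Once both copies of libcups2 sit at the same version, dpkg can configure them and then work through the packages that depend on them.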

    Read the article

  • How to create markers with the Google local search API?

    - by cheesebunz
    As the question says, I do not want to use it from the API, but instead combine it into my own code; however, I can't seem to implement it with the code I have now. The markers do not come out, and the search completely disappears if I try implementing it with my code. Here is a section of my code: http://www.mediafire.com/?0minqxgwzmx

    Read the article

  • When is a Google Maps API key required?

    - by Thomas
    Recently Google changed its policy on the use of API keys: you're now supposed to no longer need an API key to place Google Maps on your website. And this worked perfectly. But now I have this map (without an API key) running on my localhost, which works fine. But as soon as I place it online, I get a popup saying that I need another API key. And on another page on that website, Google Maps does work. Could it have something to do with the fact that the map that doesn't work has a lot (30+) of markers on it? Actually, using an API key wouldn't be a very nice solution for me, as this is part of a Wordpress plugin used on many websites.

    Read the article

  • Windows can see Ubuntu Server printer, but can't print to it

    - by Mike
    I have an old desktop that I'm trying to set up as a home backup/print server. Backup was trivial, but I am having issues getting the printing to work. The printer is connected to the server running Ubuntu Server 9.10 (no GUI). If I access the printer via http://hostname:631/printers/, I am able to print a test page, so I know the printer is working; however, I am having no luck from Windows. Windows can see the printer when browsed via \\hostname\, but I am unable to connect. Windows says "Windows cannot connect to the printer" without indicating why. Any suggestions? From /etc/samba/smb.conf:

      [global]
      workgroup = WORKGROUP
      dns proxy = no
      security = user
      username map = /etc/samba/smbusers
      encrypt passwords = true
      passdb backend = tdbsam
      obey pam restrictions = yes
      unix password sync = yes
      passwd program = /usr/bin/passwd %u
      passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
      pam password change = yes
      map to guest = bad user
      load printers = yes
      printing = cups
      printcap name = cups

      [printers]
      comment = All Printers
      browseable = no
      path = /var/spool/samba
      writable = no
      printable = yes
      guest ok = yes
      read only = yes
      create mask = 0700

      [print$]
      comment = Printer Drivers
      path = /var/lib/samba/printers
      browseable = yes
      read only = yes
      guest ok = yes

    From /etc/cups/cupsd.conf:

      LogLevel warn
      SystemGroup lpadmin
      Port 631
      Listen /var/run/cups/cups.sock
      Browsing On
      BrowseOrder allow,deny
      BrowseAllow all
      BrowseRemoteProtocols CUPS
      BrowseAddress @LOCAL
      BrowseLocalProtocols CUPS dnssd
      DefaultAuthType Basic
      <Location />
        Order allow,deny
        Allow all
      </Location>
      <Location /admin>
        Order allow,deny
        Allow all
      </Location>
      <Location /admin/conf>
        AuthType Default
        Require user @SYSTEM
        Order allow,deny
        Allow all
      </Location>
      <Policy default>
        <Limit Send-Document Send-URI Hold-Job Release-Job Restart-Job Purge-Jobs Set-Job-Attributes Create-Job-Subscription Renew-Subscription Cancel-Subscription Get-Notifications Reprocess-Job Cancel-Current-Job Suspend-Current-Job Resume-Job CUPS-Move-Job CUPS-Get-Document>
          Require user @OWNER @SYSTEM
          Order deny,allow
        </Limit>
        <Limit CUPS-Add-Modify-Printer CUPS-Delete-Printer CUPS-Add-Modify-Class CUPS-Delete-Class CUPS-Set-Default CUPS-Get-Devices>
          AuthType Default
          Require user @SYSTEM
          Order deny,allow
        </Limit>
        <Limit Pause-Printer Resume-Printer Enable-Printer Disable-Printer Pause-Printer-After-Current-Job Hold-New-Jobs Release-Held-New-Jobs Deactivate-Printer Activate-Printer Restart-Printer Shutdown-Printer Startup-Printer Promote-Job Schedule-Job-After CUPS-Accept-Jobs CUPS-Reject-Jobs>
          AuthType Default
          Require user @SYSTEM
          Order deny,allow
        </Limit>
        <Limit CUPS-Authenticate-Job>
          Require user @OWNER @SYSTEM
          Order deny,allow
        </Limit>
        <Limit All>
          Order deny,allow
        </Limit>
      </Policy>
      <Policy authenticated>
        <Limit Create-Job Print-Job Print-URI>
          AuthType Default
          Order deny,allow
        </Limit>
        <Limit Send-Document Send-URI Hold-Job Release-Job Restart-Job Purge-Jobs Set-Job-Attributes Create-Job-Subscription Renew-Subscription Cancel-Subscription Get-Notifications Reprocess-Job Cancel-Current-Job Suspend-Current-Job Resume-Job CUPS-Move-Job CUPS-Get-Document>
          AuthType Default
          Require user @OWNER @SYSTEM
          Order deny,allow
        </Limit>
        <Limit CUPS-Add-Modify-Printer CUPS-Delete-Printer CUPS-Add-Modify-Class CUPS-Delete-Class CUPS-Set-Default>
          AuthType Default
          Require user @SYSTEM
          Order deny,allow
        </Limit>
        <Limit Pause-Printer Resume-Printer Enable-Printer Disable-Printer Pause-Printer-After-Current-Job Hold-New-Jobs Release-Held-New-Jobs Deactivate-Printer Activate-Printer Restart-Printer Shutdown-Printer Startup-Printer Promote-Job Schedule-Job-After CUPS-Accept-Jobs CUPS-Reject-Jobs>
          AuthType Default
          Require user @SYSTEM
          Order deny,allow
        </Limit>
        <Limit Cancel-Job CUPS-Authenticate-Job>
          AuthType Default
          Require user @OWNER @SYSTEM
          Order deny,allow
        </Limit>
        <Limit All>
          Order deny,allow
        </Limit>
      </Policy>
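
    One frequent culprit behind this exact symptom - browsing works, but "Windows cannot connect to the printer" - is that the Windows client asks the [print$] share for a driver, finds none, and gives up instead of falling back to its locally installed driver. The use client driver option is a real Samba parameter for exactly this case, though whether it is the problem here is an assumption; a minimal sketch of the [printers] section with it added:

      [printers]
      comment = All Printers
      browseable = no
      path = /var/spool/samba
      printable = yes
      guest ok = yes
      use client driver = yes

    With use client driver = yes, Windows clients use their own driver rather than requesting one from the server; restart Samba after changing smb.conf.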

    Read the article

  • Facebook Graph API authentication in canvas app and track session

    - by cdpnet
    The short question is: how can I use the Graph API's OAuth redirect mechanism to authenticate the user and save the retrieved access_token, and also use the JavaScript SDK when needed? (The problem is that the JavaScript SDK will have a different access_token when initialized.) I initially set up my Facebook iframe canvas app with single sign-on. This works well with the Graph API, as I am able to use the access_token saved by Facebook's JavaScript when it detects sessionChange (user logged in). But I would rather not do single sign-on; instead I want to use the Graph API redirect and send the user to a permissions dialog. However, if the user has already given permissions, I shouldn't redirect. How do I handle this? Another question: I have done the Graph API redirects for authentication and have retrieved the access_token as well. But then, what if I want to call FB.ui from JavaScript to do stream.publish? I think it will use its own access_token, which is set during FB.init when it detects the session. So I am looking for some direction here: how to use the Graph API for authentication and also use Facebook's JavaScript SDK when needed. P.S. I'm using ASP.NET MVC 2 and have developed an authentication filter which needs to detect the user's authentication state and redirect (currently it redirects to the Graph API authorize URL).
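
    For the first half of the question, the server-side flow of that era is small enough to sketch. This is a minimal sketch in Python rather than the asker's ASP.NET MVC 2 stack, using the Graph API OAuth endpoints current at the time; APP_ID, APP_SECRET and REDIRECT_URI are placeholders. Note that always redirecting is usually acceptable: if the user has already granted the requested permissions, Facebook bounces straight back to redirect_uri without showing the dialog.

      import requests
      from urllib.parse import urlencode

      APP_ID, APP_SECRET = "your-app-id", "your-app-secret"   # placeholders
      REDIRECT_URI = "https://example.com/fb/callback"        # placeholder

      def authorize_url(scope="publish_stream"):
          # Send the unauthenticated user here; the dialog is skipped if
          # the permissions were already granted.
          return ("https://graph.facebook.com/oauth/authorize?" +
                  urlencode({"client_id": APP_ID,
                             "redirect_uri": REDIRECT_URI,
                             "scope": scope}))

      def exchange_code_for_token(code):
          # Facebook answers with a form-encoded body: access_token=...&expires=...
          resp = requests.get("https://graph.facebook.com/oauth/access_token",
                              params={"client_id": APP_ID,
                                      "redirect_uri": REDIRECT_URI,
                                      "client_secret": APP_SECRET,
                                      "code": code})
          return dict(p.split("=", 1) for p in resp.text.split("&"))["access_token"]

    The second half - making FB.ui share the same token - has no clean answer in that era's SDK; a common compromise is to let FB.init manage its own session for dialogs and reserve the server-retrieved token for Graph API calls.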

    Read the article

  • Printing using CUPS - when can my app quit?

    - by KPexEA
    I have a Linux app that uses CUPS for printing, but I've noticed that if I print and then quit my app right away, my printout never appears. So I assume my app has to wait for the job to actually come out of the printer before quitting; does anyone know how to tell when it's finished printing? I'm using libcups to print a PostScript file that my app generates. I call the function to print the file and it returns back to my app, so my app thinks the document is off to the printer queue when I guess it has not made it there yet. Rather than have all my users watch the screen for the printer icon in the system tray, I would rather have a solution in code, so that if they try to quit before the job has really been sent off I can alert them to the fact. Also, the file I generate is a temporary file, so it would be nice to know when it is finished with so I can delete it.
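
    The submission call returns a job ID, and the scheduler can be polled for that job's state until it reaches a terminal value. A minimal sketch using the pycups bindings - the same idea applies to the C API via cupsPrintFile, cupsGetJobs and the job's state field; the printer and file names are placeholders:

      import time
      import cups  # pycups bindings over libcups

      conn = cups.Connection()
      job_id = conn.printFile("MyPrinter", "/tmp/output.ps", "my document", {})

      # Poll until the scheduler reports a terminal state for this job.
      done = (cups.IPP_JOB_CANCELED, cups.IPP_JOB_ABORTED, cups.IPP_JOB_COMPLETED)
      while conn.getJobAttributes(job_id)["job-state"] not in done:
          time.sleep(1)

    Two caveats: "completed" means the scheduler has finished processing and handing off the job, not that paper has physically emerged; and since the scheduler keeps its own copy of the submitted file in its spool, the temporary file can normally be deleted as soon as the submission call returns, without waiting for the job to finish.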

    Read the article

  • Does Google Maps API v3 allow larger zoom values?

    - by Dr1Ku
    If you use the satellite GMapType in this Google-provided example in v3 of the API, the maximum zoom level has a scale of 2m / 10ft, whereas the v2 version of another Google-provided example (I had to use another one, since the control-simple example doesn't have the scale control) yields a maximum scale of 20m / 50ft. Is this a new "feature" of v3? I have to mention that I've tested the examples in the same GLatLng regions - so my guess is that the tile detail level doesn't influence it; am I mistaken? As mentioned in another question, v3 is to be considered of very Labs-y/beta quality, so its use in production should be discouraged for the time being. I've been drawn to the subject since I have to "increase the zoom level of a GMap"; the answers here seem to suggest using GTileLayer, and I'm considering GMapCreator, although this will involve some effort. What I'm trying to achieve is a larger zoom level - a scale of 2m / 10ft would be perfect. I have a map where the tiles aren't that hi-res and quite a few markers; seeing that the area doesn't have hi-res tiles, the distance between the markers is really tiny, creating some problematic overlapping. Or better yet, how can you create a custom map which allows higher zoom levels, as over the Google campus where the 2m / 10ft scale is achieved, without using your own tile server? I've seen an example in a fellow Stackoverflower's GMaps sandbox, where the tiles are manually created based on the zoom level. I don't think I'm making any more sense, so I'm just going to end this big question here; I've been wandering around trying to find a solution for hours now. Hope that someone comes to my aid though! Thank you in advance!

    Read the article

  • Kernel APIs, or using APIs in the kernel

    - by user513647
    Hello everybody. I'd like to know if and how I can access API calls inside the kernel. I need them to perform several integrity checks on a program of mine running in user mode, but I don't know how to access the APIs and functions required to do so. Does anybody know how to obtain the process ID of my user-mode process, and how to access all its memory to perform the check? Thanks in advance. PS: I'm on a Windows XP machine.

    Read the article

  • API Message Localization

    - by Jesse Taber
    In my post, “Keep Localizable Strings Close To Your Users” I talked about the internationalization and localization difficulties that can arise when you sprinkle static localizable strings throughout the different logical layers of an application. The main point of that post is that you should have your localizable strings reside as close to the user-facing modules of your application as possible. For example, if you’re developing an ASP .NET web forms application, all of the localizable strings should be kept in .resx files that are associated with the .aspx views of the application. In this post I want to talk about how this same concept can be applied when designing and developing APIs.

    An API Facilitates Machine-to-Machine Interaction

    You can typically think about a web, desktop, or mobile application as a collection of “views” or “screens” through which users interact with the underlying logic and data. The application can be designed based on the assumption that there will be a human being on the other end of the screen working the controls. You are designing a machine-to-person interaction and the application should be built in a way that facilitates the user’s clear understanding of what is going on. Dates should be formatted in a way that the user will be familiar with, messages should be presented in the user’s preferred language, etc. When building an API, however, there are no screens and you can’t make assumptions about who or what is on the other end of each call. An API is, by definition, a machine-to-machine interaction. A machine-to-machine interaction should be built in a way that facilitates a clear and unambiguous understanding of what is going on. Dates and numbers should be formatted in predictable and standard ways (e.g. ISO 8601 dates) and messages should be presented in machine-parseable formats. For example, consider an API for a time tracking system that exposes a resource for creating a new time entry. The JSON for creating a new time entry for a user might look like:

      {
        "userId": 4532,
        "startDateUtc": "2012-10-22T14:01:54.98432Z",
        "endDateUtc": "2012-10-22T11:34:45.29321Z"
      }

    Note how the parameters for start and end date are both expressed as ISO 8601 compliant dates in UTC. Using a date format like this in our API leaves little room for ambiguity. It’s also important to note that using ISO 8601 dates is a much, much saner thing than the \/Date(<milliseconds since epoch>)\/ nonsense that is sometimes used in JSON serialization. Probably the most important thing to note about the JSON snippet above is the fact that the end date comes before the start date! The API should recognize that and disallow the time entry from being created, returning an error to the caller. You might be inclined to send a response that looks something like this:

      {
        "errors": [ {"message": "The end date must come after the start date"} ]
      }

    While this may seem like an appropriate thing to do, there are a few problems with this approach: What if there is a user somewhere on the other end of the API call that doesn’t speak English? What if the message provided here won’t fit properly within the UI of the application that made the API call? What if the verbiage of the message isn’t consistent with the rest of the application that made the API call? What if there is no user directly on the other end of the API call (e.g. this is a batch job uploading time entries once per night unattended)? The API knows nothing about the context from which the call was made.

    There are steps you could take to give the API some context (e.g. allow the caller to send along a language code indicating the language that the end user speaks), but that will only get you so far. As the designer of the API you could make some assumptions about how the API will be called, but if we start making assumptions we could very easily make the wrong assumptions. In this situation it’s best to make no assumptions and simply design the API in such a way that the caller has the responsibility to convey error messages in a manner that is appropriate for the context in which the error was raised. You could work around some of these problems by allowing callers to add metadata to each request describing the context from which the call is being made (e.g. accepting a ‘locale’ parameter denoting the desired language), but that will add needless clutter and complexity. It’s better to keep the API simple and push those context-specific concerns down to the caller whenever possible. For our very simple time entry example, this can be done by simply changing our error message response to look like this:

      {
        "errors": [ {"code": 100} ]
      }

    By changing our error from exposing a string to a numeric code that is easily parseable by another application, we’ve placed all of the responsibility for conveying the actual meaning of the error message on the caller. It’s best to have the caller be responsible for conveying this meaning because the caller understands the context much better than the API does. Now the caller can see error code 100, know that it means that the end date submitted falls before the start date, and take appropriate action. Now all of the problems listed out above are non-issues because the caller can simply translate the error code of ‘100’ into the proper action and message for the current context. The numeric code representation of the error is a much better way to facilitate the machine-to-machine interaction that the API is meant to facilitate.

    An API Does Have Human Users

    While APIs should be built for machine-to-machine interaction, people still need to wire these interactions together. As a programmer building a client application that will consume the time entry API, I would find it frustrating to have to go dig through the API documentation every time I encounter a new error code (assuming the documentation exists and is accurate). The numeric error code approach hurts the discoverability of the API and makes it painful to integrate with. We can help ease this pain by merging our two approaches:

      {
        "errors": [ {"code": 100, "message": "The end date must come after the start date"} ]
      }

    Now we have an easily parseable numeric error code for the machine-to-machine interaction that the API is meant to facilitate and a human-readable message for programmers working with the API. The human-readable message here is not intended to be viewed by end-users of the API and as such is not really a “localizable string” in my opinion. We could opt to expose a locale parameter for all API methods and store translations for all error messages, but that’s a lot of extra effort and overhead that doesn’t add a lot of real value to the API. I might be a bit of an “ugly American”, but I think it’s probably fine to have the API return English messages when the target for those messages is a programmer. When resources are limited (which they always are), I’d argue that you’re better off hard-coding these messages in English and putting more effort into building more useful features, improving security, tweaking performance, etc.
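
    To make the caller-side half of this concrete, a minimal sketch in Python with illustrative names (the German string is only an example translation):

      # Caller-side mapping from API error codes to localized messages.
      ERROR_MESSAGES = {
          "en": {100: "The end date must come after the start date"},
          "de": {100: "Das Enddatum muss nach dem Startdatum liegen"},
      }

      def render_error(code, locale="en"):
          messages = ERROR_MESSAGES.get(locale, ERROR_MESSAGES["en"])
          return messages.get(code, "Unknown error (code %s)" % code)

    The caller picks the locale, the wording and the presentation; the API only ever commits to the stability of the codes.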

    Read the article

  • Setting up and using Bing Translate API Service for Machine Translation

    - by Rick Strahl
    Last week I spent quite a bit of time trying to set up the Bing Translate API service. I can honestly say this was one of the most screwed up developer experiences I've had in a long while - specifically related to the byzantine sign up process that Microsoft has in place. Not only is it nearly impossible to find decent documentation on the required signup process, some of the links in the docs are just plain wrong, and some of the account pages you need to access the actual account information once signed up are not linked anywhere from the administration UI. To make things even harder, the APIs changed a while back, with a completely new authentication scheme that's described in a documentation topic that isn't directly linked from anywhere, which also made for a very frustrating search experience. It's a bummer that this is the case too, because the actual API itself is easy to use and works very well - fast and reasonably accurate (as accurate as you can expect machine translation to be). But the sign up process is a pain in the ass, doubtlessly leaving many people giving up in frustration. In this post I'll try to hit all the points needed to set up to use the Bing Translate API in one place, since such a document seems to be missing from Microsoft. Hopefully the API folks at Microsoft will get their shit together and actually provide this sort of info on their site… Signing Up The first step required is to create a Windows Azure MarketPlace account. Go to: https://datamarket.azure.com/ Sign in with your Windows Live Id. If you don't have an account you will be taken to a registration page which you have to fill out. Follow the links and complete the registration. Once you're signed in you can start adding services. Click on the Data link on the main page. Select Microsoft Translator from the list. This adds the Microsoft Bing Translator to your services. Pricing The page shows the pricing matrix and the free service, which provides 2 megabytes of translations a month for free. Prices go up steeply from there. Pricing is determined by actual bytes of the result translations used. Max translations are 1000 characters, so at minimum this means you get around 2000 translations a month for free. However, most translations are probably much shorter, so you can expect a larger number of translations to go through. For testing or low volume translations this should be just fine. Once signed up there are no further instructions and you're left in limbo on the MS site. Register your Application Once you've created the Data association with Translator, the next step is registering your application. To do this you need to access your developer account. Go to https://datamarket.azure.com/developer/applications/register Provide a ClientId, which is effectively the unique string identifier for your application (not your customer id!) Provide your name. The client secret is auto-created and this becomes your 'password'. For the redirect url provide any https url: https://microsoft.com works. Give this application a description of your choice so you can identify it in the list of apps. Now, once you've registered your application, keep track of the ClientId and ClientSecret - those are the two keys you need to authenticate before you can call the Translate API. Oddly the applications page is hidden from the Azure Portal UI. I couldn't find a direct link from anywhere on the site back to this page where I can examine my developer application keys. 
    To find them you can go to: https://datamarket.azure.com/developer/applications You can come back here to look at your registered applications and pick up the ClientID and ClientSecret. Fun eh? But we're now ready to actually call the API and do some translating. Using the Bing Translate API The good news is that after this signup hell, using the API is pretty straightforward. To use the translation API you'll need to actually use two services: You need to call an authentication API service first, before you can call the actual translator API. These two APIs live on different domains, and the authentication API returns JSON data while the translator service returns XML. So much for consistency. Authentication The first step is authentication. The service uses oAuth authentication with a bearer token that has to be passed to the translator API. The authentication call retrieves the oAuth token that you can then use with the translate API call. The bearer token has a short 10 minute life time, so while you can cache it for successive calls, the token can't be cached for long periods. This means for Web backend requests you typically will have to authenticate each time unless you build a more elaborate caching scheme that takes the timeout into account (perhaps using the ASP.NET Cache object). For low volume operations you can probably get away with simply calling the auth API for every translation you do. To call the Authentication API use code like this: /// /// Retrieves an oAuth authentication token to be used on the translate /// API request. The result string needs to be passed as a bearer token /// to the translate API. /// /// You can find client ID and Secret (or register a new one) at: /// https://datamarket.azure.com/developer/applications/ /// /// The client ID of your application /// The client secret or password /// public string GetBingAuthToken(string clientId = null, string clientSecret = null) { string authBaseUrl = "https://datamarket.accesscontrol.windows.net/v2/OAuth2-13"; if (string.IsNullOrEmpty(clientId) || string.IsNullOrEmpty(clientSecret)) { ErrorMessage = Resources.Resources.Client_Id_and_Client_Secret_must_be_provided; return null; } var postData = string.Format("grant_type=client_credentials&client_id={0}" + "&client_secret={1}" + "&scope=http://api.microsofttranslator.com", HttpUtility.UrlEncode(clientId), HttpUtility.UrlEncode(clientSecret)); // POST Auth data to the oauth API string res, token; try { var web = new WebClient(); web.Encoding = Encoding.UTF8; res = web.UploadString(authBaseUrl, postData); } catch (Exception ex) { ErrorMessage = ex.GetBaseException().Message; return null; } var ser = new JavaScriptSerializer(); var auth = ser.Deserialize<BingAuth>(res); if (auth == null) return null; token = auth.access_token; return token; } private class BingAuth { public string token_type { get; set; } public string access_token { get; set; } } This code basically takes the client id and secret and posts it at the oAuth endpoint, which returns a JSON string. Here I use the JavaScript serializer to deserialize the JSON into a custom object I created just for deserialization. You can also use JSON.NET and dynamic deserialization if you are already using JSON.NET in your app, in which case you don't need the extra type. In my library that houses this component I don't, so I just rely on the built-in serializer. The auth method returns a long base64 encoded string which can be used as a bearer token in the translate API call. 
    Translation Once you have the authentication token you can use it to pass to the translate API. The auth token is passed as an Authorization header and the value is prefixed with a 'Bearer ' prefix for the string. Here's what the simple Translate API call looks like: /// /// Uses the Bing API service to perform translation /// Bing can translate up to 1000 characters. /// /// Requires that you provide a ClientId and ClientSecret /// or set the configuration values for these two. /// /// More info on setup: /// http://www.west-wind.com/weblog/ /// /// Text to translate /// Two letter culture name /// Two letter culture name /// Pass an access token retrieved with GetBingAuthToken. /// If not passed the default keys from .config file are used if any /// public string TranslateBing(string text, string fromCulture, string toCulture, string accessToken = null) { string serviceUrl = "http://api.microsofttranslator.com/V2/Http.svc/Translate"; if (accessToken == null) { accessToken = GetBingAuthToken(); if (accessToken == null) return null; } string res; try { var web = new WebClient(); web.Headers.Add("Authorization", "Bearer " + accessToken); string ct = "text/plain"; string postData = string.Format("?text={0}&from={1}&to={2}&contentType={3}", HttpUtility.UrlEncode(text), fromCulture, toCulture, HttpUtility.UrlEncode(ct)); web.Encoding = Encoding.UTF8; res = web.DownloadString(serviceUrl + postData); } catch (Exception e) { ErrorMessage = e.GetBaseException().Message; return null; } // result is a single XML Element fragment var doc = new XmlDocument(); doc.LoadXml(res); return doc.DocumentElement.InnerText; } The first part of this code deals with ensuring the auth token exists. You can either pass the token into the method manually or let the method automatically retrieve the auth code on its own. In my case I'm using this inside of a Web application and in that situation I simply need to re-authenticate every time, as there's no convenient way to manage the lifetime of the auth token. The auth token is added as an Authorization HTTP header prefixed with 'Bearer ' and attached to the request. The text to translate, the from and to language codes and a result format are passed on the query string of this HTTP GET request against the Translate API. The translate API returns an XML string which contains a single element with the translated string. Using the Wrapper Methods It should be pretty obvious how to use these two methods, but here are a couple of test methods that demonstrate the two usage scenarios: [TestMethod] public void TranslateBingWithAuthTest() { var translate = new TranslationServices(); string clientId = DbResourceConfiguration.Current.BingClientId; string clientSecret = DbResourceConfiguration.Current.BingClientSecret; string auth = translate.GetBingAuthToken(clientId, clientSecret); Assert.IsNotNull(auth); string text = translate.TranslateBing("Hello World we're back home!", "en", "de", auth); Assert.IsNotNull(text, translate.ErrorMessage); Console.WriteLine(text); } [TestMethod] public void TranslateBingIntegratedTest() { var translate = new TranslationServices(); string text = translate.TranslateBing("Hello World we're back home!", "en", "de"); Assert.IsNotNull(text, translate.ErrorMessage); Console.WriteLine(text); } Other API Methods The Translate API has a number of methods available, and this one is the simplest one but probably also the most common one, translating a single string. 
    You can find additional methods for this API here: http://msdn.microsoft.com/en-us/library/ff512419.aspx Soap and AJAX APIs are also available and documented on MSDN: http://msdn.microsoft.com/en-us/library/dd576287.aspx These links will be your starting points for calling other methods in this API. Dual Interface I've talked about my database driven localization provider here in the past, and it's for this tool that I added the Bing localization support. Basically I have a localization administration form that allows me to translate individual strings right out of the UI, using both Google and Bing APIs. As you can see in this example, the results from Google and Bing can vary quite a bit - in this case Google is stumped while Bing actually generated a valid translation. At other times it's the other way around - it's pretty useful to see multiple translations at the same time. Here I can choose one of the values and directly embed it into the translated text field. Lost in Translation There you have it. As I mentioned, once you have all the bureaucratic crap out of the way, calling the APIs is fairly straightforward and reasonably fast, even if you have to call the Auth API for every call. Hopefully this post will help out a few of you trying to navigate the Microsoft bureaucracy, at least until next time Microsoft upends everything and introduces new ways to sign up again. Until then - happy translating… Related Posts: Translation method source on GitHub; Translating with Google Translate without Google API Keys; Creating a data-driven ASP.NET Resource Provider. © Rick Strahl, West Wind Technologies, 2005-2013. Posted in Localization, ASP.NET, .NET

    Read the article

  • Question about API and Web application code sharing

    - by opendd
    This is a design question. I have a multi-part application with several user types. There is a user client for the patient that interacts with a web service. There is an API evolving behind the web service that will be exposed to institutional "users", and an interface for clinicians, researchers and admin types. The patient UI is Flex. The clinician/admin portion of the application is RoR. The API is RoR/Rack based. The web service component is Java WS. All components access the same data source. These components are deployed separately on their own subdomains. This decision was made to allow for scaling the components individually as needed. Initially, the decision was made to split the code for the RoR web application from the RoR API. This decision was made in the interests of security and keeping the components focused on specific tasks. Over the course of time there is necessarily going to be overlap, and I am second-guessing my decision to keep the code totally separate. I am noticing code from the admin side being lifted, modified and used in the API. This being the case, I have been considering merging the Ruby based repositories. I am interested in ideas and insight on this situation, along with the reasoning behind your thoughts. Thanks.

    Read the article

  • How to share a folder using the Ubuntu One Web API

    - by Mario César
    I have successfully implemented OAuth authorization with Ubuntu One in Django. Here are my views and models: https://gist.github.com/mariocesar/7102729 Right now, I can use the file_storage Ubuntu One API. For example, the following will ask if a path exists, then create the directory, and then get the information on the created path to prove it was created. >>> user.oauth_access_token.get_file_storage(volume='/~/Ubuntu One', path='/Websites/') <Response [404]> >>> user.oauth_access_token.put_file_storage(volume='/~/Ubuntu One', path='/Websites/', data={"kind": "directory"}) <Response [200]> >>> user.oauth_access_token.get_file_storage(volume='/~/Ubuntu One', path='/Websites/').json() {u'content_path': u'/content/~/Ubuntu One/Websites', u'generation': 10784, u'generation_created': 10784, u'has_children': False, u'is_live': True, u'key': u'MOQgjSieTb2Wrr5ziRbNtA', u'kind': u'directory', u'parent_path': u'/~/Ubuntu One', u'path': u'/Websites', u'resource_path': u'/~/Ubuntu One/Websites', u'volume_path': u'/volumes/~/Ubuntu One', u'when_changed': u'2013-10-22T15:34:04Z', u'when_created': u'2013-10-22T15:34:04Z'} So it works, it's great, I'm happy about that. But I can't share a folder. My question is: how can I share a folder using the API? I found no web API to do this; the Ubuntu One SyncDaemon tool is the only mention of a solution I found: https://one.ubuntu.com/developer/files/store_files/syncdaemontool#ubuntuone.platform.tools.SyncDaemonTool.offer_share But I'm reluctant to maintain a DBus session and a daemon on my server for every Ubuntu One connection I have authorization for. Does anyone have an idea how I can use a web API to programmatically share a folder? Even better, using the OAuth authorization tokens that I already have.

    Read the article

  • WordPress & InkSoft API

    - by user105405
    I am new to this whole website design and API bit. My husband has bought a license for the program InkSoft. Their site does not offer very much customization, so we decided to buy a Wordpress.org site that is hosted through GoDaddy. With all of that said, I am trying to figure out a way to take the products that are on InkSoft's website, which get their information (for things like inventory) from the suppliers' warehouses, and put them on the Wordpress site. There is an area on InkSoft where I can access "Store API...API feeds". I guess I am just confused about where this type of stuff goes in the WordPress site, and how to put it in there. If I go to the "Products" area of this Store API, I am given a URL that deals with the product data, and I am also given a HUGE list of stuff that contains entries such as: <ProductCategoryList> <vw_product_categories product_category_id="1000076" name="Most Popular" path="Most Popular" thumburl_front="REPLACE_DOMAIN_WITH/images/publishers/2433/ProductCategories/Most_Popular/80.gif" /> Can anyone give me directions on what all this is and how to use it? Thank you!!

    Read the article

  • Implementation details of database synchronisation API

    - by Daniel
    I want to achieve database synchronisation between my server database and a client application. The server runs MySQL and the applications may run different database technologies; their implementation isn't important. I have a MySQL database online and web-accessible via an API I wrote in PHP (just a detail). My client application ships with a copy of the online data. As time passes, my goal is to check for any changes in the online database and make these updates available to the client app via an API call: the client sends the date it was last updated to an API endpoint, and the response is JSON filled with all new objects, updated objects, and delete IDs, which makes it possible to update the local store appropriately. Essentially I want to do this: http://dbconvert.com/synchronization.php My question is about the implementation details. Would I need to add a column to my database tables with a "last modified" date? Since the client app could be very out of date if it's been offline for a long time, does that also mean I shouldn't delete data from the online database, but instead have another column called "delete" set to 1 and the modified date updated appropriately? Would my SQL query simply check for all data with a modified date later than the date passed into the API request by the client? I feel like there's a lot more to it than having a ton of dates everywhere. I also worry that I will need to persist a lot of old data in order to ensure that old versions of the client app always have the opportunity to delete parts of their data when they are able to sync.
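
    Both instincts in the question are the standard pattern: a last-modified timestamp plus a soft-delete flag, with the sync endpoint reduced to a single comparison. A hedged sketch in MySQL, with illustrative table and column names (:last_sync_date is a bound parameter supplied by the client's API call):

      -- Illustrative schema additions for incremental sync with soft deletes.
      ALTER TABLE items
          ADD COLUMN updated_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
              ON UPDATE CURRENT_TIMESTAMP,
          ADD COLUMN is_deleted TINYINT(1) NOT NULL DEFAULT 0;

      -- The endpoint answers "what changed since X?" in one query; rows with
      -- is_deleted = 1 become the delete-ID list in the JSON response.
      SELECT * FROM items WHERE updated_at > :last_sync_date;

    On the retention worry: soft-deleted rows only need to be kept until every client is known to have synced past them, or past some fixed horizon - clients offline longer than the horizon simply re-download the full data set.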

    Read the article

  • Unit testing ASP.NET Web API controllers that rely on the UrlHelper

    - by cibrax
    UrlHelper is the class you can use in ASP.NET Web API to automatically infer links from the routing table without hardcoding anything. For example, the following code uses the helper to infer the location url for a new resource: public HttpResponseMessage Post(User model) { var response = Request.CreateResponse(HttpStatusCode.Created, model); var link = Url.Link("DefaultApi", new { id = model.Id, controller = "Users" }); response.Headers.Location = new Uri(link); return response; } That code uses a previously defined route “DefaultApi”, which you might configure in the HttpConfiguration object (this is the route generated by default when you create a new Web API project). The problem with UrlHelper is that it requires some initialization code before you can invoke it from a unit test (for testing the Post method in this example). If you don’t initialize the HttpConfiguration and Request instances associated to the controller from the unit test, it will fail miserably. After digging into the ASP.NET Web API source code a little bit, I could figure out what the requirements for using the UrlHelper are. It relies on the routing table configuration, and a few properties you need to add to the HttpRequestMessage. The following code illustrates what’s needed: var controller = new UserController(); controller.Configuration = new HttpConfiguration(); var route = controller.Configuration.Routes.MapHttpRoute( name: "DefaultApi", routeTemplate: "api/{controller}/{id}", defaults: new { id = RouteParameter.Optional } ); var routeData = new HttpRouteData(route, new HttpRouteValueDictionary { { "id", "1" }, { "controller", "Users" } } ); controller.Request = new HttpRequestMessage(HttpMethod.Post, "http://localhost:9091/"); controller.Request.Properties.Add(HttpPropertyKeys.HttpConfigurationKey, controller.Configuration); controller.Request.Properties.Add(HttpPropertyKeys.HttpRouteDataKey, routeData); The HttpRouteData instance should be initialized with the route values you will use in the controller method (“id” and “controller” in this example). Once you have correctly set up all those properties, you shouldn’t have any problem using the UrlHelper. There is no need to mock anything else. Enjoy!!

    Read the article

  • Print over remote CUPS server, but just show a subset of the printers.

    - by jdm
    I'd like to print from my Ubuntu laptop (Karmic) to some networked printers. Our organisation uses a CUPS server with several hundred printers. What I know I can do is: CUPS_SERVER=printers.company.com acroread document.pdf and then Adobe Reader shows me all available printers to select from. However, it takes a couple of minutes to display the large list, which is really annoying. (The desktop PCs here suffer from this too.) The other option is to add a new printer with an address like ipp://printers.company.com/printer/bldg1_hp8150 (to the Ubuntu printer configuration = local CUPS server). However, it asks me for a driver. I don't want to / can't always specify a driver, since some printers don't appear in the list. I'd like to let the remote CUPS server handle the driver part (like it does when I set CUPS_SERVER), and do no more preprocessing/"driver stuff" on my side. The ideal thing would be if I could somehow add the remote printer list to my local CUPS server and apply a filter, so that it would just display printers à la bldg1_*. This feature was available in KDE 3.x, but I can't find something similar in Ubuntu/GNOME. Any suggestions?
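
    As far as I know there is no stock GNOME equivalent of the old KDE filter, but the filtering itself is easy to do client-side against the remote server's printer list. A minimal sketch with the pycups bindings, assuming the bldg1_ naming convention:

      import cups

      cups.setServer("printers.company.com")   # same effect as CUPS_SERVER
      conn = cups.Connection()

      # Fetch the full list once, keep only the bldg1_* printers.
      printers = {name: attrs for name, attrs in conn.getPrinters().items()
                  if name.startswith("bldg1_")}
      for name, attrs in sorted(printers.items()):
          print(name, "-", attrs.get("printer-info", ""))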

    Read the article

  • Canon MX870 printer only shows "Processing" on the status LCD

    - by Nick
    I had my Canon MX870 installed perfectly fine in 11.10, but since upgrading to 12.04, it no longer works. The printer is recognized in print settings and when I attempt to print a test page, the printer LCD displays a "Processing" message, but then it disappears and nothing happens. Here are my logs (note that printing did not succeed despite the access logs showing success): # /var/log/cups/access_log localhost - - [22/May/2012:12:29:35 -0400] "POST /printers/Canon-MX870 HTTP/1.1" 200 412 Print-Job successful-ok - # /var/log/cups/error_log W [22/May/2012:12:25:51 -0400] failed to CreateProfile: org.freedesktop.ColorManager.AlreadyExists:profile id 'Canon-MX870-Gray..' already exists W [22/May/2012:12:25:51 -0400] failed to CreateProfile: org.freedesktop.ColorManager.AlreadyExists:profile id 'Canon-MX870-RGB..' already exists W [22/May/2012:12:25:51 -0400] failed to CreateDevice: org.freedesktop.ColorManager.AlreadyExists:device id 'cups-Canon-MX870' already exists W [22/May/2012:12:25:51 -0400] failed to CreateProfile: org.freedesktop.ColorManager.AlreadyExists:profile id 'Canon-MX870-Gray..' already exists W [22/May/2012:12:25:51 -0400] failed to CreateProfile: org.freedesktop.ColorManager.AlreadyExists:profile id 'Canon-MX870-RGB..' already exists W [22/May/2012:12:25:51 -0400] failed to CreateDevice: org.freedesktop.ColorManager.AlreadyExists:device id 'cups-Canon-MX870' already exists W [22/May/2012:12:25:51 -0400] failed to CreateProfile: org.freedesktop.ColorManager.AlreadyExists:profile id 'Canon-MX870-Gray..' already exists W [22/May/2012:12:25:51 -0400] failed to CreateProfile: org.freedesktop.ColorManager.AlreadyExists:profile id 'Canon-MX870-RGB..' already exists W [22/May/2012:12:25:51 -0400] failed to CreateDevice: org.freedesktop.ColorManager.AlreadyExists:device id 'cups-Canon-MX870' already exists W [22/May/2012:12:25:51 -0400] failed to CreateProfile: org.freedesktop.ColorManager.AlreadyExists:profile id 'Canon-MX870-Gray..' already exists W [22/May/2012:12:25:51 -0400] failed to CreateProfile: org.freedesktop.ColorManager.AlreadyExists:profile id 'Canon-MX870-RGB..' already exists W [22/May/2012:12:25:51 -0400] failed to CreateDevice: org.freedesktop.ColorManager.AlreadyExists:device id 'cups-Canon-MX870' already exists

    Read the article

  • Google Analytics API - Super simple?

    - by Jens Törnell
    Google Analytics API - too complicated? I've read about the Google Analytics API, but have heard from others that it is a bit complicated to make it work. I use PHP. Copy / paste example My question is whether there is a copy/paste example anywhere on the web for getting a stats curve of the latest month, or just the numbers for that period. Important I need to use the new Google Analytics API version for 2012. The other one is going to die soon.
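
    There is no true copy/paste path, because the 2012 (v3) API requires OAuth2 - that is the complicated part people warn about. Once you have an access token, though, the report itself is a single HTTP GET. A minimal sketch, shown in Python rather than PHP; ACCESS_TOKEN and PROFILE_ID are placeholders you must obtain first:

      import requests

      ACCESS_TOKEN = "ya29.placeholder"   # from the OAuth2 flow
      PROFILE_ID = "ga:12345678"          # your profile (view) ID

      # Core Reporting API v3: one row per day gives the month's curve.
      resp = requests.get(
          "https://www.googleapis.com/analytics/v3/data/ga",
          params={"ids": PROFILE_ID,
                  "start-date": "2012-04-01",
                  "end-date": "2012-04-30",
                  "metrics": "ga:visits",
                  "dimensions": "ga:date"},
          headers={"Authorization": "Bearer " + ACCESS_TOKEN},
      )
      for date, visits in resp.json().get("rows", []):
          print(date, visits)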

    Read the article

  • Developing JSON API for a Carpool Engine

    - by Siddharth
    I am developing a new set of API methods for carpooling/cab booking, so that if a developer needs to build an app or web portal for carpooling, he can call my JSON API - basically making it easy for developers. My API currently has: AddVehicle, AddJourney, SearchJourney, SubscribeToThisJourney(journey), SubscriberList(journey) to get the list of people who have subscribed for this journey, AcceptSubscription(subscriber), AcceptedSubscriberList, and SubscriberList to get the list of providers I have subscribed to. I need help with replacing "subscriber" with something else. It's difficult to remember, and confusing, when you see three methods that mean very different things: SubscriberList, SubscribedToThisJourneyList and AcceptedSubscriberList. One is a list of who I have subscribed to, one of who has subscribed to me, and one of whose subscriptions I have accepted. How can I name these methods to make them easier to understand and remember?
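
    One hedged suggestion: let resource paths carry the direction instead of compound method names, so each list reads naturally from the caller's point of view (illustrative routes, not part of the existing API):

      GET /journeys/{id}/subscribers                  -> people subscribed to this journey
      GET /journeys/{id}/subscribers?status=accepted  -> subscribers I have accepted
      GET /me/subscriptions                           -> journeys I have subscribed to

    The asymmetry between "subscribers" (inbound) and "subscriptions" (outbound) is what removes the ambiguity the three original names share.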

    Read the article

  • Google Site Search -- How to use as API?

    - by John Isaacks
    I am trying to get an API that I can use to do searches on my own site. Google has something called Site Search and something called Custom Search. What is the difference? When I make a new site search, it is listed on a page with "custom search" in the heading. This is really confusing. I just want an API that I can use to search my site. I would prefer JSON to XML for the results. And if this service is offered by someone other than Google, that is fine too. The ones that I create at Google want me to embed a premade search box into my site. I do not want that; I want an API that I can call from PHP or JS. How can I get this?
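
    On the naming confusion: Google Site Search is the paid, ad-free tier of the same underlying Custom Search product, which is why the admin pages blur together. For a callable API with JSON results, the Custom Search JSON API fits the bill. A minimal sketch, shown in Python - the same GET works from PHP via cURL or file_get_contents - where API_KEY and ENGINE_ID (the "cx" value) are placeholders from the Google API Console:

      import requests

      API_KEY = "your-api-key"     # placeholder
      ENGINE_ID = "your-cx-id"     # placeholder search engine ID

      resp = requests.get(
          "https://www.googleapis.com/customsearch/v1",
          params={"key": API_KEY, "cx": ENGINE_ID, "q": "some query"},
      )
      for item in resp.json().get("items", []):
          print(item["title"], item["link"])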

    Read the article

  • Open Source Web-based CMS for writing and managing API documentation

    - by netcoder
    This is a question that has somewhat been asked before (i.e.: How to manage an open source project's documentation). However, my question is a little different because: we're not developing open source software, but proprietary software; the documentation has to be hand-written, because we do not want to publish the actual software API documentation, but only the public API documentation; and I do want developers and project managers to write the documentation collaboratively. Obviously, wikis are a solution, but they're very generic. I'm looking for a more specialized tool for this job. I've looked around and found a few tools like Adobe RoboHelp, SaaS solutions and such, but I'd like to know if any open source software exists for that purpose. Do you know any open source web-based CMS for writing and managing API and software documentation?

    Read the article
