Search Results

Search found 11710 results on 469 pages for 'pivotaltracker api'.

  • Android SMS API

    - by Schildmeijer
    I know that the SMS content provider is not part of the public API (at least it is not documented), but if I understand correctly it's still possible to use many of the SMS features as long as you know how to use the API(?). E.g. it's pretty straightforward to insert an SMS into your inbox:

        ContentValues values = new ContentValues();
        values.put("address", "+457014921911");
        contentResolver.insert(Uri.parse("content://sms"), values);

    Unfortunately this does not trigger the standard "new-SMS-in-your-inbox" notification. Is it possible to trigger this manually?

    Edit: AFAIK the standard mail application (Messaging) in Android listens for incoming SMSes using the android.permission.RECEIVE_SMS permission. Then, when a new SMS has arrived, a status bar notification is inserted with a "special" notification id. So one solution to my problem (stated above) could be to find and send the correct broadcast intent; something like a "NEW SMS HAS ARRIVED" intent.

    Edit: I downloaded a third-party messaging application (chompSMS) from the Android Market, and it satisfies my needs better. When I execute the code above, chompSMS notices the new SMS and shows the standard status bar notification. So I would say that the standard Android Messaging application is not detecting SMSes properly? Or am I wrong?

  • Unable to post XML to the LinkedIn Share API

    - by Vijesh V.Nair
    I am using Delphi 2010, with Indy 10.5.8 (SVN version) and oAuth.pas from chuckbeasley. I am able to collect a token with the app key and app secret, authorize the token with a web page, and access the final token. Now I have to post a status with LinkedIn's Share API, and I am getting an unauthorized response. My request and response are given below.

    Request:

        POST /v1/people/~/shares HTTP/1.0
        Content-Encoding: utf-8
        Content-Type: text/xml; charset=us-ascii
        Content-Length: 999
        Authorization: OAuth oauth_consumer_key="xxx",oauth_signature_method="HMAC-SHA1",oauth_timestamp="1340438599",oauth_nonce="BB4C78E0A6EB452BEE0FAA2C3F921FC4",oauth_version="1.0",oauth_token="xxx",oauth_signature="Pz8%2FPz8%2FPz9ePzkxPyc%2FDD82Pz8%3D"
        Host: api.linkedin.com
        Accept: text/html, */*
        Accept-Encoding: identity
        User-Agent: Mozilla/3.0 (compatible; Indy Library)

        %3C%3Fxml+version=%25221.0%2522%2520encoding%253D%2522UTF-8%2522%253F%253E%253Cshare%253E%253Ccomment%253E83%2525%2520of%2520employers%2520will%2520use%2520social%2520media%2520to%2520hire%253A%252078%2525%2520LinkedIn%252C%252055%2525%2520Facebook%252C%252045%2525%2520Twitter%2520%255BSF%2520Biz%2520Times%255D%2520http%253A%252F%252Fbit.ly%252FcCpeOD%253C%252Fcomment%253E%253Ccontent%253E%253Ctitle%253ESurvey%253A%2520Social%2520networks%2520top%2520hiring%2520tool%2520-%2520San%2520Francisco%2520Business%2520Times%253C%252Ftitle%253E%253Csubmitted-url%253Ehttp%253A%252F%252Fsanfrancisco.bizjournals.com%252Fsanfrancisco%252Fstories%252F2010%252F06%252F28%252Fdaily34.html%253C%252Fsubmitted-url%253E%253Csubmitted-image-url%253Ehttp%253A%252F%252Fimages.bizjournals.com%252Ftravel%252Fcityscapes%252Fthumbs%252Fsm_sanfrancisco.jpg%253C%252Fsubmitted-image-url%253E%253C%252Fcontent%253E%253Cvisibility%253E%253Ccode%253Eanyone%253C%252Fcode%253E%253C%252Fvisibility%253E%253C%252Fshare%253E

    Response:

        HTTP/1.1 401 Unauthorized
        Server: Apache-Coyote/1.1
        x-li-request-id: K14SWRPEPL
        Date: Sat, 23 Jun 2012 08:07:17 GMT
        Vary: *
        x-li-format: xml
        Content-Type: text/xml;charset=UTF-8
        Content-Length: 341
        Connection: keep-alive

        <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
        <error>
          <status>401</status>
          <timestamp>1340438838344</timestamp>
          <request-id>K14SWRPEPL</request-id>
          <error-code>0</error-code>
          <message>[unauthorized]. OAU:xxx|nnnnn|*01|*01:1340438599:Pz8/Pz8/Pz9ePzkxPyc/DD82Pz8=</message>
        </error>

    Please help. Regards, Vijesh Nair
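
    Two things stand out in the dump above: the XML body has been percent-encoded (twice, judging by the %2522 sequences) instead of being sent raw, and the oauth_signature decodes to mostly question marks, which suggests the signature base string was built from mangled text. For comparison, here is a minimal sketch of the same share call in Python with requests_oauthlib; this is not the Delphi/Indy fix itself, only the shape a correctly signed request should take, and the key/token values are placeholders:

        import requests
        from requests_oauthlib import OAuth1

        # Placeholders: the consumer key/secret and access token/secret
        # obtained in the OAuth dance described in the question.
        auth = OAuth1("CONSUMER_KEY", "CONSUMER_SECRET",
                      "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET",
                      signature_method="HMAC-SHA1")

        share_xml = """<?xml version="1.0" encoding="UTF-8"?>
        <share>
          <comment>Testing the Share API</comment>
          <visibility><code>anyone</code></visibility>
        </share>"""

        # The body is sent as raw XML. Per OAuth 1.0a, only form-encoded bodies
        # participate in the signature, so no manual percent-encoding is needed.
        resp = requests.post("https://api.linkedin.com/v1/people/~/shares",
                             data=share_xml.encode("utf-8"),
                             headers={"Content-Type": "text/xml;charset=UTF-8"},
                             auth=auth)
        print(resp.status_code, resp.text)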

  • HttpError 502 with Google Wave Active Robot API fetch_wavelet()

    - by Drew LeSueur
    I am trying to use the Google Wave Active Robot API's fetch_wavelet() and I get an HTTP 502 error. Example:

        from waveapi import robot
        import passwords

        robot = robot.Robot('gae-run', 'http://images.com/fake-image.jpg')
        robot.setup_oauth(passwords.CONSUMER_KEY, passwords.CONSUMER_SECRET,
                          server_rpc_base='http://www-opensocial.googleusercontent.com/api/rpc')
        wavelet = robot.fetch_wavelet('googlewave.com!w+dtuZi6t3C', 'googlewave.com!conv+root')
        robot.submit(wavelet)
        self.response.out.write(wavelet.creator)

    But the error I get is this:

        Traceback (most recent call last):
          File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/webapp/__init__.py", line 511, in __call__
            handler.get(*groups)
          File "/base/data/home/apps/clstff/gae-run.342467577023864664/main.py", line 23, in get
            robot.submit(wavelet)
          File "/base/data/home/apps/clstff/gae-run.342467577023864664/waveapi/robot.py", line 486, in submit
            res = self.make_rpc(pending)
          File "/base/data/home/apps/clstff/gae-run.342467577023864664/waveapi/robot.py", line 251, in make_rpc
            raise IOError('HttpError ' + str(code))
        IOError: HttpError 502

    Any ideas?

    Edit: When [email protected] is not a member of the wave I get the correct error message:

        Error: RPC Error500: internalError: [email protected] is not a participant of wave id: [WaveId:googlewave.com!w+Pq1HgvssD] wavelet id: [WaveletId:googlewave.com!conv+root]. Unable to apply operation: {'method':'robot.fetchWave','id':'655720','waveId':'googlewave.com!w+Pq1HgvssD','waveletId':'googlewave.com!conv+root','blipId':'null','parameters':{}}

    But when [email protected] is a member of the wave I get the HTTP 502 error:

        IOError: HttpError 502
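
    Since the 502 only shows up once the robot actually has access, it looks more like a transient gateway error on the RPC endpoint than a permissions problem. A small debugging aid worth trying is a retry wrapper around the submit call; this is a sketch, and submit_with_retry is my own helper name, not part of waveapi:

        import time

        def submit_with_retry(robot, wavelet, attempts=3, delay=2.0):
            # Retry robot.submit() on the IOError that waveapi raises for
            # HTTP errors, assuming the 502 is intermittent. If it still
            # fails after the last attempt, re-raise so the real error shows.
            for attempt in range(attempts):
                try:
                    return robot.submit(wavelet)
                except IOError:
                    if attempt == attempts - 1:
                        raise
                    time.sleep(delay * (attempt + 1))  # simple linear backoff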

  • Worklight client-side API overrideBackButton doesn't work

    - by user2503429
    I want to make something happen when the back button is pressed, so I put this code in my Myhtml.js file:

        function wlCommonInit() {
            /*
             * Application is started in offline mode as defined by a connectOnStartup property in initOptions.js file.
             * In order to begin communicating with Worklight Server you need to either:
             *
             * 1. Change connectOnStartup property in initOptions.js to true.
             *    This will make Worklight framework automatically attempt to connect to Worklight Server as a part of application start-up.
             *    Keep in mind - this may increase application start-up time.
             *
             * 2. Use WL.Client.connect() API once connectivity to a Worklight Server is required.
             *    This API needs to be called only once, before any other WL.Client methods that communicate with the Worklight Server.
             *    Don't forget to specify and implement onSuccess and onFailure callback functions for WL.Client.connect(), e.g:
             *
             *    WL.Client.connect({
             *        onSuccess: onConnectSuccess,
             *        onFailure: onConnectFailure
             *    });
             */

            // Common initialization code goes here
        }

        WL.App.overrideBackButton(backFunc);

        function backFunc() {
            alert('You will back to previous page');
        }

    But after building and deploying the app and running it, nothing happens when I press the back button. Does anybody have a solution?

  • Secure REST API for running user "apps" in an iframe

    - by Brian Armstrong
    I want to let users create "apps" (like Facebook apps) for my website, and I'm trying to figure out the best way to make it secure. I have a REST API, and I want to run the user apps in an iframe on my own site (not in a safe markup language like FBML).

    I first looked at OAuth, but this seems like overkill for my situation. The "apps" don't need to run on external sites or in desktop apps or anything; the user would stay on my site at all times but see the user-submitted "app" through the iframe.

    So when I call the app for the first time through the iframe, I can pass it some variables so it knows which logged-in user is using it on my site. It can then use this user session in its own API calls to customize the display. If the call is passed in the clear, though, I don't want someone to be able to intercept the session and impersonate the user. Does anyone know a good way to do this, or a good write-up on it? Thanks!
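
    One common approach here is the signed-request pattern that Facebook itself uses for iframe apps: sign the variables you pass into the iframe with a per-app secret, so the app can verify them and an eavesdropper cannot forge or replay them for long. A minimal sketch in Python; all names and fields are illustrative, not an existing library API:

        import base64
        import hashlib
        import hmac
        import json
        import time

        APP_SECRET = b"per-app-secret-shared-with-the-app-developer"  # hypothetical

        def make_signed_payload(user_id, secret=APP_SECRET, ttl=300):
            # Site side: build the signed, expiring payload appended to the
            # iframe URL. The expiry limits how long an intercepted payload
            # is useful.
            body = base64.urlsafe_b64encode(
                json.dumps({"user_id": user_id,
                            "expires": int(time.time()) + ttl}).encode()
            )
            sig = hmac.new(secret, body, hashlib.sha256).hexdigest()
            return body.decode() + "." + sig

        def verify_signed_payload(payload, secret=APP_SECRET):
            # App side: recompute the HMAC and reject tampered or expired
            # payloads before trusting the embedded user id.
            body, sig = payload.rsplit(".", 1)
            expected = hmac.new(secret, body.encode(), hashlib.sha256).hexdigest()
            if not hmac.compare_digest(sig, expected):
                raise ValueError("bad signature")
            data = json.loads(base64.urlsafe_b64decode(body))
            if data["expires"] < time.time():
                raise ValueError("expired")
            return data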

  • Using the Search API with SharePoint Foundation 2010 - 0 results

    - by MB
    I am a SharePoint newbie and am having trouble getting any search results to return using the search API in SharePoint 2010 Foundation. Here are the steps I have taken so far:

    - The service "SharePoint Foundation Search v4" is running and logged in as Local Service.
    - Under Team Site - Site Settings - Search and Offline Availability, Indexing Site Content is enabled.
    - Running the PowerShell cmdlet Get-SPSearchServiceInstance returns:

        TypeName      : SharePoint Foundation Search
        Description   : Search index file on the search server
        Id            : 91e01ce1-016e-44e0-a938-035d37613b70
        Server        : SPServer Name=V-SP2010
        Service       : SPSearchService Name=SPSearch4
        IndexLocation : C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\Data\Applications
        ProxyType     : Default
        Status        : Online

    When I do a search using the search textbox on the team site I get results as I would expect. However, when I try to duplicate the search results using the search API, I either receive an error or get 0 results. Here is some sample code:

        using Microsoft.SharePoint.Search.Query;

        using (var site = new SPSite(_sharepointUrl, token))
        {
            FullTextSqlQuery fullTextSqlQuery = new FullTextSqlQuery(site)
            {
                QueryText = String.Format(
                    "SELECT Title, SiteName, Path FROM Scope() WHERE \"scope\"='All Sites' AND CONTAINS('\"{0}\"')",
                    searchPhrase),
                //QueryText = String.Format("SELECT Title, SiteName, Path FROM Scope()", searchPhrase),
                TrimDuplicates = true,
                StartRow = 0,
                RowLimit = 200,
                ResultTypes = ResultType.RelevantResults
                //IgnoreAllNoiseQuery = false
            };

            ResultTableCollection resultTableCollection = fullTextSqlQuery.Execute();
            ResultTable result = resultTableCollection[ResultType.RelevantResults];
            DataTable tbl = new DataTable();
            tbl.Load(result, LoadOption.OverwriteChanges);
        }

    When the scope is set to All Sites I get an error about the search scope not being available. Other searches just return 0 results. Any ideas about what I am doing wrong?

  • Twython Search API rate limit: header information is not updated

    - by user2715478
    I want to handle the Search API rate limit of 180 requests per 15 minutes. The first solution I came up with was to check the remaining requests in the header and wait 900 seconds. See the following snippet:

        results = search_interface.cursor(search_interface.search, q=k, lang=lang, result_type=result_mode)

        while True:
            try:
                tweet = next(results)
                if limit_reached(search_interface):
                    sleep(900)
                self.writer(tweet)
            except StopIteration:
                break

        def limit_reached(search_interface):
            remaining_rate = int(search_interface.get_lastfunction_header('X-Rate-Limit-Remaining'))
            return remaining_rate <= 2

    But it seems that the header information is not reset to 180 after it reaches the two remaining requests.

    The second solution I came up with was to handle the Twython exception for rate limiting and wait the remaining amount of time:

        results = search_interface.cursor(search_interface.search, q=k, lang=lang, result_type=result_mode)

        while True:
            try:
                tweet = next(results)
                self.writer(tweet)
            except TwythonError as inst:
                logger.error(inst.msg)
                wait_for_reset(search_interface)
                continue
            except StopIteration:
                break

        def wait_for_reset(search_interface):
            reset_timestamp = int(search_interface.get_lastfunction_header('X-Rate-Limit-Reset'))
            now_timestamp = datetime.now().timestamp()
            seconds_offset = 10
            t = reset_timestamp - now_timestamp + seconds_offset
            logger.info('Waiting {0} seconds for Twitter rate limit reset.'.format(t))
            sleep(t)

    But with this solution I receive the message "INFO: Resetting dropped connection: api.twitter.com" and the loop does not continue with the last element of the generator. Has anybody faced the same problems? Regards.
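
    For what it's worth, here is a slightly more defensive variant of the wait helper. It is a sketch (wait_for_reset_safe is my own name; the only Twython call used is get_lastfunction_header, which the snippets above already rely on). It clamps the sleep at zero, since the window may already have reset by the time the exception is handled, and it sticks to time.time(), which, like the X-Rate-Limit-Reset header, is in UTC epoch seconds:

        import time

        def wait_for_reset_safe(search_interface, padding=10):
            # X-Rate-Limit-Reset is UTC epoch seconds and time.time() is too,
            # so no datetime conversion is needed. max() prevents a negative
            # sleep when the reset moment has already passed.
            reset_at = int(search_interface.get_lastfunction_header('X-Rate-Limit-Reset'))
            time.sleep(max(0, reset_at - int(time.time())) + padding)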

  • Resource mapping in a Ruby on Rails URL (RESTful API)

    - by randombits
    I'm having a bit of difficulty coming up with the right answer to this, so I will pose my problem here. I'm working on a RESTful API. Naturally, I have multiple resources, some of which consist of parent-to-child relationships and some of which are standalone resources. Where I'm having difficulty is figuring out how to make things easier for the folks who will be building clients against my API.

    The situation is this. Hypothetically, I have a Street resource. Each street has multiple homes, so Street has_many Homes and a Home belongs_to a Street. If a user wants to perform an HTTP GET on a specific home resource, the following should work:

        http://mymap/streets/5/homes/10

    That allows a user to get information for the home with id 10. Straightforward. My question is: am I breaking the rules of the book by also giving the user access to:

        http://mymap/homes/10

    Technically that home resource exists on its own, without the street. It makes sense that it exists as its own entity without an encapsulating street, even though business logic says otherwise. What's the best way to handle this?
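
    Nothing in REST forbids one resource from having two URIs; the nested route can simply be an alias (or a scoped lookup) for the canonical one. The question is framed in Rails, but the idea is easiest to show compactly; here is a sketch in Python/Flask, with toy data and invented names, purely to illustrate the routing shape:

        from flask import Flask, abort, jsonify

        app = Flask(__name__)

        HOMES = {10: {"id": 10, "street_id": 5, "name": "10 Elm St"}}  # toy data

        @app.route("/homes/<int:home_id>")                           # canonical URI
        @app.route("/streets/<int:street_id>/homes/<int:home_id>")   # nested alias
        def show_home(home_id, street_id=None):
            # One resource, two addresses; the nested route only adds a
            # consistency check that the home really belongs to that street.
            home = HOMES.get(home_id)
            if home is None or (street_id is not None and home["street_id"] != street_id):
                abort(404)
            return jsonify(home)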

  • Timing issues with playback of the HTML5 Audio API

    - by pat
    I'm using the following code to try to play a sound clip with the HTML5 Audio API:

        HTMLAudioElement.prototype.playClip = function(startTime, stopTime) {
            this.stopTime = stopTime;
            this.currentTime = startTime;
            this.play();
            $(this).bind('timeupdate', function() {
                if (this.ended || this.currentTime >= stopTime) {
                    this.pause();
                    $(this).unbind('timeupdate');
                }
            });
        }

    I use this new playClip method as follows. First I have a link with some data attributes:

        <a href=# data-stop=1.051 data-start=0.000>And then I was thinking,</a>

    And finally this bit of jQuery, which runs on $(document).ready, to hook up a click on the link with the playback:

        $('a').click(function(ev) {
            var start = $(this).data('start'),
                stop = $(this).data('stop'),
                audio = $('audio').get(0),
                $audio = $(audio);
            ev.preventDefault();
            audio.playClip(start, stop);
        });

    This approach seems to work, but there's a frustrating bug: sometimes the playback of a given clip plays beyond the correct data-stop time. I suspect it could have something to do with the timing of the timeupdate event, but I'm no JS guru and I don't know how to begin debugging the problem. Here are a few clues I've gathered:

    - The same behavior appears in both Firefox and Chrome.
    - The playback of a given clip actually seems to vary a bit: if I play the same clip a couple of times in a row, it may over-play by a different amount of time on each playing.

    Is the problem here the inherent accuracy of the Audio API? My app needs milliseconds. Is there a problem with the way I'm using jQuery to bind and unbind the timeupdate event? I tried the jQuery-less approach with addEventListener but I couldn't get it to work. Thanks in advance, I would really love to know what's going wrong.

  • Confused about Android APIs and compatibility

    - by Keith
    I have purchased an HTC Incredible and have dived into the world of Android, only to find myself totally confused about API levels and backward compatibility. My device runs the 2.1 OS, but I know that most of the devices out there run 1.5 or 1.6, and soon the 2.2 OS will be running on new devices.

    The SDK has gone through such enormous changes that even constants have been renamed (from VIEW_ACTION to ACTION_VIEW, for example). Methods have been added and removed (onPause replacing the earlier call, et al.). So, if I want to write an application that will work on 1.6 and up, does that mean I have to install and write my code against the 1.6 API and then test on later versions? Or can I write using the 2.1 SDK, just set the minSDK level, and not use "new" features?

    I have never worked with an SDK that changes SO drastically from release to release, so I am not sure what to do. I read through an article on the Android Development site (and this posting on Stack Overflow that references it: http://stackoverflow.com/questions/2076150/should-a-legacy-android-application-be-rebuilt-using-sdk-2-1), but it was still not very clear to me. Any help would be appreciated.

  • Google Doclist API: bug in revision history?

    - by Jerbome
    I'm trying to manage revisions of files (not documents) stored in Google Docs programmatically, using the GData Document List Java library v3. I can create files and revisions using this tool, and I can see them in the web UI. The thing is, the content of my revisions seems wrong. Here is my test protocol:

    1. I create a plain text file containing "Hello World".
    2. I upload it to Google Docs without converting it.
    3. I create a revision of this file, changing its content to "Content of the second version".
    4. I create another revision; its content is now "Content of the third version".

    At each step, I check the content of each revision, using my app AND using the web UI. Here is what I get:

    - First step: no problem. I see one version containing the "Hello World" text.
    - Second step: no problem either. I see two versions, containing "Hello World" for the first one and "Content of the second version" for the second.
    - Third step: here the problem comes. I see my three versions, but only the third and last seems to be correct. When I download the second version, the content is "Content of the second versio" (not a typo; it is missing the 'n'). And I cannot even download the initial version; it seems to time out.

    Important thing: I did not have this problem three weeks ago; my revision management worked well. I have no idea what is happening there, except that it seems to be server-related, as the problem is seen both with my app and with Google's native web app.

    Last thing: I tried using the Google Drive API, as Google Docs had been merged with Drive. When I request revisions of my file, the API returns an error saying that revisions are not supported for files, even though I can see them in the UI. I tried on converted documents, and it worked.

    I'm looking for a workaround for this problem. Has anyone ever encountered such a problem? Thanks in advance, Jérôme

  • ESPN API - How can I retrieve college basketball conferences using the Teams API?

    - by nomad
    The support forums on ESPN.com recommend using Stack Overflow with the ESPN tag; that's why I'm here. I'm trying to obtain a list of all NCAA college basketball teams using ESPN's Teams API. I started with this GET request:

        http://api.espn.com/v1/sports/basketball/mens-college-basketball/teams?apikey=MY_API_KEY

    That gave me a list of teams, but many of them are missing. For example, there is no Nebraska. So then I thought that maybe I need to get a list of teams by conference, and I read this in the documentation:

        GROUPS: Allows for filtering by "group" or division, e.g. AL East, NFC South, etc.
        For group IDs and their corresponding values, make a request to
        http://developer.espn.com/v1/{resource}/leagues. Not applicable to golf and tennis.

    So then I try to make a request to http://developer.espn.com/v1/basketball/mens-college-basketball/leagues?apikey=MY_API_KEY and it says the page does not exist. Is this a bug or user error?
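
    One thing worth trying: developer.espn.com hosts the documentation pages, while the teams request that works goes to api.espn.com, so the /leagues path may simply need the data host and the full sports path. A quick probe of that guess in Python (the host substitution is an assumption, not documented behavior, and the API key is a placeholder as in the question):

        import requests

        API_KEY = "MY_API_KEY"  # placeholder

        # Try the /leagues resource on the data host, mirroring the teams call.
        resp = requests.get(
            "http://api.espn.com/v1/sports/basketball/mens-college-basketball/leagues",
            params={"apikey": API_KEY},
        )
        print(resp.status_code)
        print(resp.text[:500])  # peek at the body to see group IDs or an error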

  • OpenCalais API using jQuery

    - by Varun
    Hi, I've been wanting to do some language processing for an application and have been trying to use the OpenCalais API. The call to the API works perfectly and returns data. The problem is that even though I can see the data in Firebug, I cannot access it, because the callback never triggers. Details:

    - The call requires callback=? as it is a JSONP request, so callback=foo SHOULD call the foo method once the data is returned. It doesn't. In fact, if callback is anything other than ?, the call fails and doesn't return any data. (I've also checked JSLint to make sure the returned JSON is valid.)
    - I tried using $.ajax instead of $.getJSON so that I could get an error callback, but neither the success nor the error callback fires.
    - In Firebug, usually when you make a JSON request, the request shows up in the console. In this case, the request doesn't show up in the console but instead shows in the "Net" tab. I don't know what that means, but somehow the browser doesn't think it's an XHR request.

    Any ideas? Thanks.
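
    If the JSONP wrapping cannot be made to cooperate with jQuery's callback=? convention, one way around it is to call the service server-side and have the page fetch plain JSON from your own origin with an ordinary $.getJSON. A sketch of such a proxy in Python/Flask; note that the OpenCalais endpoint, header names, and payload format below are recalled from the old Calais REST docs and should be treated as assumptions, not verified API details:

        import requests
        from flask import Flask, jsonify, request

        app = Flask(__name__)

        CALAIS_URL = "http://api.opencalais.com/tag/rs/enrich"  # assumed endpoint
        LICENSE_ID = "YOUR_LICENSE_KEY"                         # placeholder

        @app.route("/calais", methods=["POST"])
        def calais_proxy():
            # Forward the raw text posted by the browser to OpenCalais and
            # return the JSON response from our own origin, sidestepping JSONP.
            resp = requests.post(
                CALAIS_URL,
                data=request.get_data(),
                headers={
                    "x-calais-licenseID": LICENSE_ID,  # assumed header name
                    "Content-Type": "text/raw",
                    "Accept": "application/json",
                },
            )
            return jsonify(resp.json())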

  • Why do we get a sudden spike in response times?

    - by Christian Hagelid
    We have an API that is implemented using ServiceStack and hosted in IIS. While performing load testing of the API we discovered that the response times are good, but that they deteriorate rapidly as soon as we hit about 3,500 concurrent users per server. We have two servers, and when hitting them with 7,000 users total, the average response times sit below 500 ms for all endpoints. The boxes are behind a load balancer, so we get 3,500 concurrent users per server. However, as soon as we increase the number of total concurrent users, we see a significant increase in response times: increasing the concurrent users to 5,000 per server gives us an average response time per endpoint of around 7 seconds.

    The memory and CPU on the servers are quite low, both while the response times are good and after they deteriorate. At peak, with 10,000 concurrent users, the CPU averages just below 50% and the RAM sits around 3-4 GB out of 16. This leaves us thinking that we are hitting some kind of limit somewhere. A screenshot of key perfmon counters during a load test with a total of 10,000 concurrent users showed the highlighted requests/second counter becoming really erratic; that erratic pattern is our main indicator, and as soon as we see it we notice slow response times in the load test.

    How do we go about troubleshooting this performance issue? We are trying to identify whether this is a coding issue or a configuration issue. Are there any settings in web.config or IIS that could explain this behaviour? The application pool is running .NET v4.0 and the IIS version is 7.5. The only change we have made from the default settings is to update the application pool Queue Length value from 1,000 to 5,000. We have also added the following config settings to the Aspnet.config file:

        <system.web>
          <applicationPool
            maxConcurrentRequestsPerCPU="5000"
            maxConcurrentThreadsPerCPU="0"
            requestQueueLimit="5000" />
        </system.web>

    More details: the purpose of the API is to combine data from various external sources and return the result as JSON. It currently uses an in-memory cache implementation to cache individual external calls at the data layer. The first request to a resource fetches all the data required, and any subsequent requests for the same resource get results from the cache. We have a "cache runner" implemented as a background process that updates the information in the cache at certain set intervals. We have added locking around the code that fetches data from the external resources. We have also implemented the services that fetch the data from the external sources in an asynchronous fashion, so that an endpoint should only be as slow as the slowest external call (unless we have data in the cache, of course). This is done using the System.Threading.Tasks.Task class. Could we be hitting a limitation in terms of the number of threads available to the process?
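
    For reference, the fetch-and-cache pattern described in the last paragraph looks roughly like the sketch below, written in Python/asyncio rather than .NET Tasks purely to keep it short, with all names invented. The detail worth comparing against the real code is the per-key lock: if the actual locking is coarser (one lock for all cache keys, or a lock held across the slow external calls), requests will queue behind it at exactly the kind of concurrency threshold described above, without ever showing up as CPU or memory pressure.

        import asyncio
        import time

        _cache = {}   # key -> (expires_at, combined_result)
        _locks = {}   # key -> asyncio.Lock, one lock per cache key

        async def fetch_source(name):
            # Stand-in for one external HTTP call.
            await asyncio.sleep(0.1)
            return {name: "data"}

        async def get_resource(key, sources, ttl=60):
            # Fan out to all sources concurrently and cache the combined
            # result, so the endpoint is only as slow as the slowest upstream.
            hit = _cache.get(key)
            if hit and hit[0] > time.monotonic():
                return hit[1]
            lock = _locks.setdefault(key, asyncio.Lock())
            async with lock:                       # per-key, not one global lock
                hit = _cache.get(key)              # re-check after acquiring
                if hit and hit[0] > time.monotonic():
                    return hit[1]
                parts = await asyncio.gather(*(fetch_source(s) for s in sources))
                data = {k: v for part in parts for k, v in part.items()}
                _cache[key] = (time.monotonic() + ttl, data)
                return data

        # asyncio.run(get_resource("stores", ["source_a", "source_b"]))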

  • ip6tables blocks INPUT? Can't connect to the YouTube API

    - by klaas
    I thought I had a simple IPv6 firewall, but it turned out to be hell. Somehow I really can't connect to any IPv6 host from my machine unless I set the INPUT policy to ACCEPT. Below is my current ip6tables configuration:

        # ip6tables -L
        Chain INPUT (policy DROP)
        target     prot       opt  source    destination
        ACCEPT     all             anywhere  anywhere      state RELATED,ESTABLISHED
        ACCEPT     ipv6-icmp       anywhere  anywhere
        ACCEPT     tcp             anywhere  anywhere      tcp dpt:http
        ACCEPT     tcp             anywhere  anywhere      tcp dpt:https

        Chain FORWARD (policy ACCEPT)
        target     prot       opt  source    destination

        Chain OUTPUT (policy ACCEPT)
        target     prot       opt  source    destination

    If I try to connect to any IPv6 address, it doesn't work:

        telnet gdata.youtube.com 80
        Trying 2a00:1450:4013:c00::76...

    or:

        telnet gdata.youtube.com 443
        Trying 2a00:1450:4013:c00::76...

    When I set:

        ip6tables -P INPUT ACCEPT

    it works. But then, well, then everything is open? What is going on? Help?

  • Firefox addon "f@stestfox": API sending/collecting data?

    - by Richard
    System: Ubuntu 64-bit / Firefox 24.0. Object: the addon "f@stestfox". It's a nice in-browser search tool and more.

    What is problematic is the way the program handles search queries. When I use a search shortcut, Burp Suite shows a request to msgs.smarterfox.com:80:

        GET /log_msg?name=popup_bubble_searched&search_engine_title=Search%20Startpage&source=FastestFox&redirect_to=https%3A%2F%2Fstartpage.com%2Fdo%2Fsearch%3Fcmd%3Dprocess_search%26cat%3Dweb%26query%3Dnginx%26language%3Denglish%26no_sugg%3D1%26ff%3D%26abp%3D-1&rand=856827465 HTTP/1.1
        Host: msgs.smarterfox.com
        User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-US,en;q=0.5
        Accept-Encoding: gzip, deflate
        DNT: 1
        Connection: keep-alive

    Once I saw a unique identifier (installation time?) sent with a request to the server. Am I right that the addon sends the website I am looking at to the server? Sometimes I only mark text (an IP address or a link) and the addon sends this data? Seriously?

    What I did: searched for the URL in the code, but I don't speak Java. And I am not sure if the data from the request can actually be used for tracking :)

    Question: I want the awesome features of the addon without connecting to their server; marked text should be sent only to the search engines. What should I do next? Thank you.

  • Crash with the COM CoUninitialize() API

    - by jbp117
    My main process (main.exe) initialized the COM library and created a thread which creates a new process (p1.exe). This new process again initializes the COM library and, after bringing all references down to zero, uninitializes COM there. It then also uninitializes COM in the main process (i.e. main.exe). When I run p1.exe individually it is successful, but it crashes when I create the p1.exe process from main.exe.

  • Proxy settings in the JavaMail API

    - by coder
    I've written a piece of Java code where user1 sends email to user2. I'm behind a proxy, and hence I'm getting a javax.mail.MessagingException. How do I solve this problem? Here is the code:

        import java.util.Properties;

        import javax.mail.Message;
        import javax.mail.MessagingException;
        import javax.mail.PasswordAuthentication;
        import javax.mail.Session;
        import javax.mail.Transport;
        import javax.mail.internet.InternetAddress;
        import javax.mail.internet.MimeMessage;

        public class Mail {
            public static void main(String[] args) {
                final String username = "[email protected]";
                final String password = "abc";

                Properties props = new Properties();
                props = System.getProperties();
                props.put("mail.smtp.auth", "true");
                props.put("mail.smtp.starttls.enable", "true");
                props.put("mail.smtp.host", "smtp.gmail.com");
                props.put("mail.smtp.port", "587");

                Session session = Session.getInstance(props,
                    new javax.mail.Authenticator() {
                        protected PasswordAuthentication getPasswordAuthentication() {
                            return new PasswordAuthentication(username, password);
                        }
                    });

                try {
                    Message message = new MimeMessage(session);
                    message.setFrom(new InternetAddress("[email protected]"));
                    message.setRecipients(Message.RecipientType.TO,
                        InternetAddress.parse("[email protected]"));
                    message.setSubject("Testing Subject");
                    message.setText("Dear Mail Crawler," + "\n\n No spam to my email, please!");

                    Transport.send(message);
                    System.out.println("Done");
                } catch (MessagingException e) {
                    throw new RuntimeException(e);
                }
            }
        }

  • Methods for preventing large-scale data scraping from a REST API

    - by Simon Kenyon Shepard
    I know the immediate answer to this is going to be that there is no 100% reliable method of doing this. But I'd like to create a question that details the different possibilities, the difficulty of implementing them, and their success rates. I would like to go from simple software IP/request-speed analysis to high-end sophisticated software/hardware tools, e.g. neural networks, with the goal of predicting and preventing bogus requests and attempts to scrape the service. Many thanks.
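
    At the simple end of that spectrum, per-client request-rate analysis usually starts with something like a token bucket. A sketch in Python, with all names illustrative; a real deployment would key on the API token rather than the IP and keep state in shared storage such as Redis, not process memory:

        import time
        from collections import defaultdict

        class TokenBucket:
            # Per-client token bucket: each client accrues `rate` tokens per
            # second up to `burst`; each request spends one token, and a
            # client with an empty bucket gets throttled (e.g. HTTP 429).
            def __init__(self, rate=5.0, burst=20):
                self.rate, self.burst = rate, burst
                self.state = defaultdict(lambda: (burst, time.monotonic()))

            def allow(self, client_id):
                tokens, last = self.state[client_id]
                now = time.monotonic()
                tokens = min(self.burst, tokens + (now - last) * self.rate)
                if tokens < 1:
                    self.state[client_id] = (tokens, now)
                    return False
                self.state[client_id] = (tokens - 1, now)
                return True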

  • Use nginx proxy_pass and rewrite to hide third-party API credentials?

    - by brakertech
    Objective: I'd like to have nginx hide the credentials in calls to a third-party API.

    Example request:

        http://example.com/myapi/60601

    Nginx-rewritten proxy_pass request it should make:

        http://api.remix.bestbuy.com/v1/stores(area(60601,10))?show=storeId,name&apiKey=redacted_24_character_string

    Code I've tried (this works, but not for parameters):

        location ^~ /myapi/ {
            rewrite ^/myapi/(.*) /api/check/$1/key/e95fad09aa5091b7734d1a268b53cef5 break;
            proxy_pass http://api.server.com/;
        }
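
    If the nginx rewrite keeps fighting the query string, the same credential-hiding can be done one layer up in a small application proxy. A sketch in Python/Flask, mapping the example request above onto the BestBuy URL from the question; the parameter handling and names are illustrative, not a drop-in replacement for the nginx config:

        import requests
        from flask import Flask, Response

        app = Flask(__name__)
        API_KEY = "redacted_24_character_string"  # kept out of client-visible URLs

        @app.route("/myapi/<zipcode>")
        def stores_near(zipcode):
            # Forward the request upstream with the secret key added
            # server-side, then relay the upstream response verbatim.
            upstream = requests.get(
                "http://api.remix.bestbuy.com/v1/stores(area({0},10))".format(zipcode),
                params={"show": "storeId,name", "apiKey": API_KEY},
            )
            return Response(upstream.content,
                            status=upstream.status_code,
                            content_type=upstream.headers.get("Content-Type"))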

  • Puppet: write hosts entries using an API call

    - by Ben Smith
    I'm trying to write a Puppet function that calls my hosting environment (Rackspace Cloud at the moment) to list servers, then update my hosts file. My get_hosts function is currently this:

        require 'rubygems'
        require 'cloudservers'

        module Puppet::Parser::Functions
          newfunction(:get_hosts, :type => :rvalue) do |args|
            unless args.length == 1
              raise Puppet::ParseError, "Must provide the datacenter"
            end
            DC = args[0]
            USERNAME = DC == "us" ? "..." : "..."
            API_KEY = DC == "us" ? "..." : "..."
            AUTH_URL = DC == "us" ? CloudServers::AUTH_USA : CloudServers::AUTH_UK
            DOMAIN = "..."

            cs = CloudServers::Connection.new(:username => USERNAME, :api_key => API_KEY, :auth_url => AUTH_URL)
            cs.list_servers_detail.map {|server|
              server.map {|s| { s[:name] + "." + DC + DOMAIN => { :ip => s[:addresses][:private][0], :aliases => s[:name] }}}
            }
          end
        end

    And I have a hosts.pp that calls this and should write it to /etc/hosts:

        class hosts::us {
          $hosts = get_hosts("us")
          hostentry { $hosts: }
        }

        define hostentry() {
          host { $name:
            ip           => $name[ip],
            host_aliases => $name[aliases]
          }
        }

    As you can imagine, this isn't currently working, and I'm getting a 'Symbol as array index at /etc/puppet/manifests/hosts.pp:2' error. I imagine that once I've realised what I'm currently doing wrong, there will be more errors to come. Is this a good idea? Can someone help me work out how to do this?

  • Getting a new session key after Facebook offline_access permission

    - by Richard
    I have a mobile application that I'm using with Facebook Connect, and I'm having trouble getting an offline_access session key after a user has granted extended permissions. Here's the user flow:

    1. The user goes to my site for the first time.
    2. I send them to m.facebook.com/tos.php? and pass my API key and secret.
    3. The user logs in using Facebook Connect.
    4. Facebook returns them to a page on my site, mysite/login-success.php, with an auth_token in the query string.
    5. On mysite/login-success.php I instantiate the FB API client and check to see if I already have an offline_access session key for them:

        $facebook = new Facebook($appapikey, $appsecret);

    6. If they haven't already provided offline_access, FB gives me a temporary session key.
    7. I need to get offline_access permission from the user, so I forward them on to www.facebook.com/connect/prompt_permissions.php? and pass offline_access in the query string.
    8. The user authorizes offline_access and gets forwarded to mysite/permissions-success.php.

    The problem I'm having is that after instantiating the API client on permissions-success.php, the session key I have is still the temporary session key, not a new offline_access session key. The only way I've found to get the offline_access key is to delete all cookies for the user and then have them log in again using Facebook Connect, which is a fairly poor user experience.

    Can anyone shed some light on how to use the Facebook API to generate a new session key even if one already exists (in my case, a temporary session key)?

  • PHP Facebook Cronjob with offline access

    - by Mohamed Salem
    1: The code to greet the user, ask for his permission, and store his session data so that we can use a cronjob with his session data afterwards:

        <?php
        $db_server = "localhost";
        $db_username = "username";
        $db_password = "password";
        $db_name = "databasename";

        #go to line 85, the script actually starts there
        mysql_connect($db_server, $db_username, $db_password);
        mysql_select_db($db_name);
        #you have to create a database to store session values.
        #if you do not know what columns there should be look at line 76 to see column names.
        #make them all varchars

        # Now lets load the FB GRAPH API
        require './facebook.php';

        // Create our Application instance.
        global $facebook;
        $facebook = new Facebook(array(
            'appId'  => '121036530138',
            'secret' => '9bbec378147064',
            'cookie' => false,));

        # Lets set up the permissions we need and set the login url in case we need it.
        $par['req_perms'] = "friends_about_me,friends_education_history,friends_likes, friends_interests,friends_location,friends_religion_politics, friends_work_history,publish_stream,friends_activities, friends_events, friends_hometown,friends_location ,user_interests,user_likes,user_events, user_about_me,user_status,user_work_history,read_requests, read_stream,offline_access,user_religion_politics,email,user_groups";
        $loginUrl = $facebook->getLoginUrl($par);

        function save_session($session){
            global $facebook;
            # OK lets go to the database and see if we have a session stored
            $sid = mysql_query("Select access_token from facebook_user WHERE uid =".$session['uid']);
            $session_id = mysql_fetch_row($sid);
            if (is_array($session_id)) {
                # We have a stored session, but is it valid?
                echo " We have a session, but is it valid?";
                try {
                    $attachment = array('access_token' => $session_id[0]);
                    $ret_code = $facebook->api('/me', 'GET', $attachment);
                } catch (Exception $e) {
                    # We don't have a good session so
                    echo " our old session is not valid, let's delete saved invalid session data ";
                    $res = mysql_query("delete from facebook_user WHERE uid =".$session['uid']);
                    #save new good session
                    #to see what is our session data:
                    print_r($session);
                    if (is_array($session)) {
                        $sql = "insert into facebook_user (session_key,uid,expires,secret,access_token,sig) VALUES ('".$session['session_key']."','".$session['uid']."','".$session['expires']."','".$session['secret']."','".$session['access_token']."','".$session['sig']."');";
                        $res = mysql_query($sql);
                        return $session['access_token'];
                    }
                    # this should never ever happen
                    echo " Something is terribly wrong: Our old session was bad, and now we cannot get the new session";
                    return;
                }
                echo " Our old stored session is valid ";
                return $session_id[0];
            } else {
                echo " no stored session, this means the user never subscribed to our application before. ";
                # let's store the session
                $session = $facebook->getSession();
                if (is_array($session)) {
                    # Yes we have a session! so lets store it!
                    $sql = "insert into facebook_user (session_key,uid,expires,secret,access_token,sig) VALUES ('".$session['session_key']."','".$session['uid']."','".$session['expires']."','".$session['secret']."','".$session['access_token']."','".$session['sig']."');";
                    $res = mysql_query($sql);
                    return $session['access_token'];
                }
            }
        }

        #this is the first meaningful line of this script.
        $session = $facebook->getSession();

        # Is the user already subscribed to our application?
        if ( is_null($session) ) {
            # no he is not
            #send him to permissions page
            header( "Location: $loginUrl" );
        } else {
            #yes, he is already subscribed, or subscribed just now
            #in case he just subscribed now, save his session information
            $access_token = save_session($session);
            echo " everything is ok";
            # write your code here to do something afterwards
        }
        ?>

    Error output:

        Warning: session_start() [function.session-start]: Cannot send session cache limiter - headers already sent (output started at /home/content/28/9687528/html/ss/src/indexx.php:1) in /home/content/28/9687528/html/ss/src/facebook.php on line 49

        Fatal error: Call to undefined method Facebook::getSession() in /home/content/28/9687528/html/ss/src/indexx.php on line 86

    2: A cronjob template that reads the stored session of a user from the database and uses his session data to work on his behalf, like reading status posts or publishing posts, etc.:

        <?php
        $db_server = "localhost";
        $db_username = "username";
        $db_password = "pass";
        $db_name = "database";

        # Lets connect to the Database and set up the table
        $link = mysql_connect($db_server, $db_username, $db_password);
        mysql_select_db($db_name);

        # Now lets load the FB GRAPH API
        require './facebook.php';

        // Create our Application instance.
        global $facebook;
        $facebook = new Facebook(array(
            'appId'  => 'appid',
            'secret' => 'secret',
            'cookie' => false,
        ));

        function get_check_session($uidCheck){
            global $facebook;
            # This function basically checks for a stored session and if we have one it returns it
            # OK lets go to the database and see if we have a session stored
            $sid = mysql_query("Select access_token from facebook_user WHERE uid =".$uidCheck);
            $session_id = mysql_fetch_row($sid);
            if (is_array($session_id)) {
                # We have a session
                # but, is it valid?
                try {
                    $attachment = array('access_token' => $session_id[0],);
                    $ret_code = $facebook->api('/me', 'GET', $attachment);
                } catch (Exception $e) {
                    # We don't have a good session so
                    echo " User ".$uidCheck." removed the application, or there is some other access problem. ";
                    # let's delete stored data
                    $res = mysql_query("delete from facebook_user where WHERE uid =".$uidCheck);
                    return;
                }
                return $session_id[0];
            } else {
                # "no stored session";
                echo " error: newsFeedcrontab.php No stored sessions. This should not have happened ";
            }
        }

        # get all users that have given us offline access
        $users = getUsers();
        foreach($users as $user){
            # now for each user, check if they are still subscribed to our application
            echo " Checking user ".$user;
            $access_token = get_check_session($user);
            # If we've not got an access_token we actually need to login.
            # but in the crontab, we just log the error, there is no way we can find the user to give us permission here.
            if ( is_null($access_token) ) {
                echo " error: newsFeedcrontab.php There is no access token for the user ".$user." ";
            } else {
                #we are going to read the newsfeed of user. There are user's friends' posts in this newsfeed
                try{
                    $attachment = array('access_token' => $access_token);
                    $result = $facebook->api('/me/home', 'GET', $attachment);
                }catch(Exception $e){
                    echo " error: newsfeedcrontab.php, cannot get feed of ".$user.$e;
                }
                #do something with the result here
                #but what does the result look like?
                #go to http://developers.facebook.com/docs/reference/api/user/ and click on the "home" link under connections

                #we can also read the home of user. Home is the wall of the user who has given us offline access.
                try{
                    $attachment = array('access_token' => $access_token);
                    $result = $facebook->api('/me/feed', 'GET', $attachment);
                }catch(Exception $e){
                    echo " error: newsfeedcrontab.php, cannot get wall of ".$user.$e;
                }
                #do something with the result here
                #but what does the result look like?
                #go to http://developers.facebook.com/docs/reference/api/user/ and click on the "feed" link under connections
            }
        }

        function getUsers(){
            $sql = "SELECT distinct(uid) from facebook_user Where 1";
            $result = mysql_query($sql);
            while($row = mysql_fetch_array($result)){
                $rows[] = $row['uid'];
            }
            print_r($rows);
            return $rows;
        }

        mysql_close($link);
        ?>

    Error output:

        Warning: session_start() [function.session-start]: Cannot send session cache limiter - headers already sent (output started at /home/content/28/9687528/html/ss/src/cron.php:1) in /home/content/28/9687528/html/ss/src/facebook.php on line 49

        Warning: mysql_fetch_array(): supplied argument is not a valid MySQL result resource in /home/content/28/9687528/html/ss/src/cron.php on line 110

        Warning: Invalid argument supplied for foreach() in /home/content/28/9687528/html/ss/src/cron.php on line 64

  • Is there a case for parameterising using Abstract classes rather than Interfaces?

    - by Chris
    I'm currently developing a component-based API that is heavily stateful. The top-level components implement around a dozen interfaces each. The stock top-level components therefore sit on top of a stack of abstract implementations, which in turn contain multiple mixin implementations and implement multiple mixin interfaces. So far, so good (I hope).

    The problem is that the base functionality is extremely complex to implement (thousands of lines across five layers of base classes), and therefore I do not wish for component writers to implement the interfaces themselves, but rather to extend my base classes (where all the boilerplate code is already written). If the API accepts interfaces rather than references to the abstract implementation that I wish component writers to extend, then I run the risk that an implementer will not perform the validation that is both required and assumed by other areas of the code.

    Therefore, my question is: is it sometimes valid to parameterise API methods using an abstract implementation reference rather than a reference to the interface(s) that it implements? Do you have an example of a well-designed API that uses this technique, or am I trying to talk myself into bad practice?
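
    For concreteness, here is the shape of the trade-off in a Python ABC sketch, with all names invented. The base class owns the validation and exposes a template method, so an API that accepts the base class, rather than the raw interface, can rely on the checks having run; the abstract hook is the only thing component writers touch:

        from abc import ABC, abstractmethod

        class Component(ABC):
            # Base class that owns the non-negotiable validation, so API
            # methods can be parameterised on Component rather than on a
            # bare interface that implementers might satisfy incompletely.

            def process(self, payload):
                self._validate(payload)        # boilerplate the API relies on
                return self._do_process(payload)

            def _validate(self, payload):
                if payload is None:
                    raise ValueError("payload must not be None")

            @abstractmethod
            def _do_process(self, payload):
                """The only part component writers implement."""

        class UppercaseComponent(Component):
            def _do_process(self, payload):
                return payload.upper()

        def run(component: Component, payload):
            # Parameter typed on the base class: validation is guaranteed.
            return component.process(payload)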
