Search Results

Search found 25570 results on 1023 pages for 'low level api'.


  • Using the Search API with SharePoint Foundation 2010 - 0 results

    - by MB
    I am a SharePoint newbie and am having trouble getting any search results to return using the search API in SharePoint 2010 Foundation. Here are the steps I have taken so far:

    - The service "SharePoint Foundation Search v4" is running and logged in as Local Service.
    - Under Team Site - Site Settings - Search and Offline Availability, Indexing Site Content is enabled.
    - Running the PowerShell cmdlet Get-SPSearchServiceInstance returns:

        TypeName      : SharePoint Foundation Search
        Description   : Search index file on the search server
        Id            : 91e01ce1-016e-44e0-a938-035d37613b70
        Server        : SPServer Name=V-SP2010
        Service       : SPSearchService Name=SPSearch4
        IndexLocation : C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\Data\Applications
        ProxyType     : Default
        Status        : Online

    When I do a search using the search textbox on the team site I get results as I would expect. Now, when I try to duplicate the search results using the search API, I either receive an error or 0 results. Here is some sample code:

        using Microsoft.SharePoint.Search.Query;

        using (var site = new SPSite(_sharepointUrl, token))
        {
            FullTextSqlQuery fullTextSqlQuery = new FullTextSqlQuery(site)
            {
                QueryText = String.Format(
                    "SELECT Title, SiteName, Path FROM Scope() WHERE \"scope\"='All Sites' AND CONTAINS('\"{0}\"')",
                    searchPhrase),
                //QueryText = String.Format("SELECT Title, SiteName, Path FROM Scope()", searchPhrase),
                TrimDuplicates = true,
                StartRow = 0,
                RowLimit = 200,
                ResultTypes = ResultType.RelevantResults
                //IgnoreAllNoiseQuery = false
            };

            ResultTableCollection resultTableCollection = fullTextSqlQuery.Execute();
            ResultTable result = resultTableCollection[ResultType.RelevantResults];
            DataTable tbl = new DataTable();
            tbl.Load(result, LoadOption.OverwriteChanges);
        }

    When the scope is set to All Sites I receive an error about the search scope not being available. Other searches just return 0 results. Any ideas about what I am doing wrong?

    Read the article

  • twython search API rate limit: header information will not be updated

    - by user2715478
    I want to handle the Search API rate limit of 180 requests / 15 minutes. The first solution I came up with was to check the remaining requests in the header and wait 900 seconds. See the following snippet:

        results = search_interface.cursor(search_interface.search, q=k, lang=lang, result_type=result_mode)
        while True:
            try:
                tweet = next(results)
                if limit_reached(search_interface):
                    sleep(900)
                self.writer(tweet)
            except StopIteration:
                break

        def limit_reached(search_interface):
            remaining_rate = int(search_interface.get_lastfunction_header('X-Rate-Limit-Remaining'))
            return remaining_rate <= 2

    But it seems that the header information is not reset to 180 after it reaches the two remaining requests. The second solution I came up with was to handle the twython exception for rate limitation and wait the remaining amount of time:

        results = search_interface.cursor(search_interface.search, q=k, lang=lang, result_type=result_mode)
        while True:
            try:
                tweet = next(results)
                self.writer(tweet)
            except TwythonError as inst:
                logger.error(inst.msg)
                wait_for_reset(search_interface)
                continue
            except StopIteration:
                break

        def wait_for_reset(search_interface):
            reset_timestamp = int(search_interface.get_lastfunction_header('X-Rate-Limit-Reset'))
            now_timestamp = datetime.now().timestamp()
            seconds_offset = 10
            t = reset_timestamp - now_timestamp + seconds_offset
            logger.info('Waiting {0} seconds for Twitter rate limit reset.'.format(t))
            sleep(t)

    But with this solution I receive the message "INFO: Resetting dropped connection: api.twitter.com" and the loop does not continue with the last element of the generator. Has somebody faced the same problems? Regards.
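
    For what it's worth, the two ideas can be combined into one guard that sleeps only until the reset time Twitter reports; a minimal sketch (assuming twython's get_lastfunction_header and the standard X-Rate-Limit-* headers of the time):

        import time

        def wait_if_needed(search_interface, margin=10):
            # Check the remaining quota after each call and, when nearly
            # exhausted, sleep until the reported reset epoch plus a margin.
            remaining = int(search_interface.get_lastfunction_header('X-Rate-Limit-Remaining'))
            if remaining <= 2:
                reset_at = int(search_interface.get_lastfunction_header('X-Rate-Limit-Reset'))
                time.sleep(max(reset_at - time.time(), 0) + margin)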

    Read the article

  • Resource mapping in a Ruby on Rails URL (RESTful API)

    - by randombits
    I'm having a bit of difficulty coming up with the right answer to this, so I will solicit my problem here. I'm working on a RESTful API. Naturally, I have multiple resources, some of which consist of parent-to-child relationships and some of which are standalone resources. Where I'm having a bit of difficulty is figuring out how to make things easier for the folks who will be building clients against my API.

    The situation is this. Hypothetically I have a 'Street' resource. Each street has multiple homes, so Street :has_many Homes and Home :belongs_to Street. If a user wants to request an HTTP GET on a specific home resource, the following should work:

        http://mymap/streets/5/homes/10

    That allows a user to get information for the home with id 10. Straightforward. My question is, am I breaking the rules of the book by also giving the user access to:

        http://mymap/homes/10

    Technically that home resource exists on its own without the street. It makes sense for it to exist as its own entity without an encapsulating street, even though business logic says otherwise. What's the best way to handle this?
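
    Exposing both the nested and the flat URL for the same resource is a common, accepted pattern (in Rails terms, nested resources plus a top-level resources :homes, only: [:show], or shallow: true). A framework-neutral sketch of the idea, written here in Flask purely for illustration:

        from flask import Flask, jsonify

        app = Flask(__name__)

        # Toy data store standing in for the real models.
        HOMES = {10: {"id": 10, "street_id": 5, "address": "10 Main St"}}

        # Both URLs resolve to the same handler; the nested form merely
        # validates that the home really belongs to the given street.
        @app.route("/homes/<int:home_id>")
        @app.route("/streets/<int:street_id>/homes/<int:home_id>")
        def show_home(home_id, street_id=None):
            home = HOMES.get(home_id)
            if home is None or (street_id is not None and home["street_id"] != street_id):
                return jsonify(error="not found"), 404
            return jsonify(home)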

    Read the article

  • Timing issues with playback of the HTML5 Audio API

    - by pat
    I'm using the following code to try to play a sound clip with the HTML5 Audio API:

        HTMLAudioElement.prototype.playClip = function(startTime, stopTime) {
            this.stopTime = stopTime;
            this.currentTime = startTime;
            this.play();
            $(this).bind('timeupdate', function(){
                if (this.ended || this.currentTime >= stopTime) {
                    this.pause();
                    $(this).unbind('timeupdate');
                }
            });
        }

    I utilize this new playClip method as follows. First I have a link with some data attributes:

        <a href=# data-stop=1.051 data-start=0.000>And then I was thinking,</a>

    And finally this bit of jQuery which runs on $(document).ready to hook up a click on the link with the playback:

        $('a').click(function(ev){
            var start = $(this).data('start'),
                stop = $(this).data('stop'),
                audio = $('audio').get(0),
                $audio = $(audio);
            ev.preventDefault();
            audio.playClip(start, stop);
        });

    This approach seems to work, but there's a frustrating bug: sometimes the playback of a given clip plays beyond the correct data-stop time. I suspect it could have something to do with the timing of the timeupdate event, but I'm no JS guru and I don't know how to begin debugging the problem. Here are a few clues I've gathered:

    - The same behavior appears in both Firefox and Chrome.
    - The playback of a given clip actually seems to vary a bit -- if I play the same clip a couple of times in a row, it may over-play by a different amount of time on each playing.
    - Is the problem here the inherent accuracy of the Audio API? My app needs milliseconds.
    - Is there a problem with the way I'm using jQuery to bind and unbind the timeupdate event? I tried the jQuery-less approach with addEventListener but I couldn't get it to work.

    Thanks in advance, I would really love to know what's going wrong.

    Read the article

  • Confused about Android APIs and compatibility

    - by Keith
    I have purchased an HTC Incredible and have dived into the world of Android, only to find myself totally confused about API levels and backward compatibility. My device runs the 2.1 OS, but I know that most of the devices out there run 1.5 or 1.6, and soon the 2.2 OS will be running on new devices.

    The SDK has gone through such enormous changes that even constants have been renamed (from VIEW_ACTION to ACTION_VIEW, for example). Methods have been added and removed (onPause replacing an earlier callback, and so on). So, if I want to write an application that will work on 1.6 and above, does that mean I have to install and write my code against the 1.6 API, then test on later versions? Or can I write against the 2.1 SDK, just set the minSDK level, and not use "new" features?

    I have never worked with an SDK that changes SO drastically from release to release, so I am not sure what to do. I read through an article on the Android development site (and this posting on Stack Overflow that references it: http://stackoverflow.com/questions/2076150/should-a-legacy-android-application-be-rebuilt-using-sdk-2-1), but it was still not very clear to me. Any help would be appreciated.

    Read the article

  • Google Doclist API: bug in revision history?

    - by Jerbome
    I'm trying to manage revisions of files (not documents) stored in Google Docs programmatically, using the GData Document List Java library v3. I can create files and revisions using this tool: I can see them in the web UI. The thing is, the content of my revisions seems wrong. Here is my test protocol:

    1. I create a plain text file with "Hello World" in it.
    2. I upload it to Google Docs without converting it.
    3. I create a revision of this file, changing its content to "Content of the second version".
    4. I create another revision; its content is now "Content of the third version".

    At each step, I check the content of each revision, using my app AND using the web UI. Here is what I get:

    - First step: no problem. I see one version containing the "Hello World" text.
    - Second step: no problem either. I see 2 versions, containing "Hello World" and "Content of the second version" respectively.
    - Third step: here the problem comes. I see my 3 versions, but only the third and last seems to be correct. When I download the second version, the content is "Content of the second versio" (not a typo, it misses the 'n'). And I cannot even download the initial version; it seems to time out.

    Important thing: I did not have this problem three weeks ago; my revision management worked well. I have no idea what happens there, except that it seems to be server-related, as the problem is seen both with my app and with Google's native web app.

    Last thing: I tried using the Google Drive API, as Google Docs has been merged with Drive. When I request the revisions of my file, the API returns an error saying that revisions are not supported for files, even though I can see them in the UI. I tried on converted documents, and it worked. I'm looking for a workaround for this problem. Has anyone ever encountered such a problem? Thanks in advance, Jérôme
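
    For reference, the Drive-side revision listing the last paragraph describes looks roughly like this; a sketch assuming google-api-python-client and an already-authorized Drive v2 service object (field names follow the v2 revisions feed):

        def print_revisions(service, file_id):
            # List a file's revisions through the Drive v2 API.
            feed = service.revisions().list(fileId=file_id).execute()
            for rev in feed.get('items', []):
                # Non-converted files expose a per-revision downloadUrl;
                # converted Google documents expose exportLinks instead.
                print(rev['id'], rev.get('modifiedDate'), rev.get('downloadUrl'))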

    Read the article

  • ESPN API - How can I retrieve college basketball conferences using the Teams API?

    - by nomad
    The support forums on ESPN.com recommend using Stack Overflow with the ESPN tag, which is why I'm here. I'm trying to obtain a list of all NCAA college basketball teams using ESPN's Teams API. I started with this GET request:

        http://api.espn.com/v1/sports/basketball/mens-college-basketball/teams?apikey=MY_API_KEY

    That gave me a list of teams, but many of them are missing. For example, there is no Nebraska. So then I thought that maybe I need to get a list of teams by conference, and I read this in the documentation:

        GROUPS: Allows for filtering by "group" or division, e.g. AL East, NFC South, etc. For group IDs and their corresponding values, make a request to http://developer.espn.com/v1/{resource}/leagues. Not applicable to golf and tennis.

    So then I try to make a request to

        http://developer.espn.com/v1/basketball/mens-college-basketball/leagues?apikey=MY_API_KEY

    and it says the page does not exist. Is this a bug or user error?
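
    For anyone reproducing this, the request is easy to script; a sketch using the requests library (the pagination parameters and the response shape are assumptions based on ESPN's v1 examples of the time -- a default page size would also explain teams appearing to be "missing"):

        import requests

        API_KEY = "MY_API_KEY"  # placeholder, as in the question
        BASE = "http://api.espn.com/v1/sports/basketball/mens-college-basketball"

        def all_teams():
            # Walk the offset in case the teams list is paginated.
            teams, offset = [], 0
            while True:
                resp = requests.get(BASE + "/teams",
                                    params={"apikey": API_KEY, "offset": offset})
                resp.raise_for_status()
                data = resp.json()
                # Response shape assumed from ESPN v1 examples; adjust as needed.
                page = data["sports"][0]["leagues"][0]["teams"]
                if not page:
                    return teams
                teams.extend(page)
                offset += len(page)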

    Read the article

  • OpenCalais API using jQuery

    - by Varun
    Hi, I've wanted to do some language processing for an application and have been trying to use the OpenCalais API. The call to the API works perfectly and returns data. The problem is that even though I can see the data in Firebug, I cannot access it because the callback never triggers. Details:

    - The call requires callback=? as it is a JSONP request, so callback=foo SHOULD call the foo method once the data is returned. It doesn't. In fact, if callback is anything other than ?, the call fails and doesn't return any data. (I've also checked JSLint to make sure the returned JSON is valid.)
    - I tried using $.ajax instead of $.getJSON so that I can get an error callback, but neither the success nor the error callback fires.
    - In Firebug, usually when you make a JSON request, the request shows up in the console. In this case, the request doesn't show up in the console, but instead shows in the "Net" tab... dunno what that means, but somehow the browser doesn't think it's an XHR request.

    Any ideas? Thanks.
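
    One way to sidestep JSONP entirely is to call OpenCalais server-side and serve plain same-origin JSON to the page; a minimal Flask sketch (the endpoint URL and header name below are placeholders, not the real OpenCalais values -- check the API docs):

        import requests
        from flask import Flask, request, jsonify

        app = Flask(__name__)

        CALAIS_URL = "https://calais.example.test/enrich"  # placeholder endpoint
        API_KEY = "YOUR_CALAIS_KEY"                        # placeholder credential

        @app.route("/analyze", methods=["POST"])
        def analyze():
            # The page POSTs text here (same origin), so $.getJSON/JSONP
            # and its callback problems never enter the picture.
            resp = requests.post(CALAIS_URL,
                                 data=request.get_data(),
                                 headers={"X-API-Key": API_KEY})  # placeholder header
            return jsonify(resp.json())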

    Read the article

  • Why do we get a sudden spike in response times?

    - by Christian Hagelid
    We have an API that is implemented using ServiceStack and hosted in IIS. While performing load testing of the API we discovered that the response times are good but that they deteriorate rapidly as soon as we hit about 3,500 concurrent users per server. We have two servers, and when hitting them with 7,000 users the average response times sit below 500ms for all endpoints. The boxes are behind a load balancer, so we get 3,500 concurrents per server. However, as soon as we increase the number of total concurrent users we see a significant increase in response times. Increasing the concurrent users to 5,000 per server gives us an average response time per endpoint of around 7 seconds.

    The memory and CPU on the servers are quite low, both while the response times are good and after they deteriorate. At peak, with 10,000 concurrent users, the CPU averages just below 50% and the RAM sits around 3-4 GB out of 16. This leaves us thinking that we are hitting some kind of limit somewhere. The screenshot below shows some key counters in perfmon during a load test with a total of 10,000 concurrent users. The highlighted counter is requests/second. To the right of the screenshot you can see the requests-per-second graph becoming really erratic. This is the main indicator of slow response times: as soon as we see this pattern we notice slow response times in the load test.

    How do we go about troubleshooting this performance issue? We are trying to identify if this is a coding issue or a configuration issue. Are there any settings in web.config or IIS that could explain this behaviour? The application pool is running .NET v4.0 and the IIS version is 7.5. The only change we have made from the default settings is to update the application pool Queue Length value from 1,000 to 5,000. We have also added the following config settings to the Aspnet.config file:

        <system.web>
            <applicationPool maxConcurrentRequestsPerCPU="5000"
                             maxConcurrentThreadsPerCPU="0"
                             requestQueueLimit="5000" />
        </system.web>

    More details: the purpose of the API is to combine data from various external sources and return it as JSON. It is currently using an in-memory cache implementation to cache individual external calls at the data layer. The first request to a resource will fetch all data required, and any subsequent requests for the same resource will get results from the cache. We have a 'cache runner' implemented as a background process that updates the information in the cache at certain set intervals. We have added locking around the code that fetches data from the external resources. We have also implemented the services that fetch the data from the external sources in an asynchronous fashion, so that the endpoint should only be as slow as the slowest external call (unless we have data in the cache, of course). This is done using the System.Threading.Tasks.Task class. Could we be hitting a limitation in terms of the number of threads available to the process?

    Read the article

  • Importing a tab-based outline txt file into a Word outline

    - by Bernard Vander Beken
    Given a text file containing an outline with tabs to indent each level, I would like to import this into a Word 2007 document so that each indentation level is converted to an H1, H2, etc. heading level. I tried copy-pasting the text into the outline view and opening the text file via File, Open. Neither gave the expected result.

        Level 1
            Level 2
                Level 3
        Level 1
            Level 2

    Note: I am using spaces instead of tabs to indent this sample.
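
    If scripting is an option, the conversion is a few lines in Python; a sketch using the python-docx package (assuming a tab-indented file and at most nine heading levels):

        from docx import Document  # pip install python-docx

        def outline_to_docx(txt_path, docx_path):
            doc = Document()
            with open(txt_path, encoding="utf-8") as f:
                for line in f:
                    if not line.strip():
                        continue
                    depth = len(line) - len(line.lstrip("\t"))  # leading tabs = nesting
                    doc.add_heading(line.strip(), level=min(depth + 1, 9))
            doc.save(docx_path)

        outline_to_docx("outline.txt", "outline.docx")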

    Read the article

  • ip6tables blocks INPUT? Can't connect to the YouTube API

    - by klaas
    I thought I had set up a simple IPv6 firewall, but it turned out to be hell. Somehow I really can't connect to any IPv6 host from my machine unless I set the INPUT policy to ACCEPT. Below is my current ip6tables ruleset:

        ip6tables -L
        Chain INPUT (policy DROP)
        target     prot       opt  source    destination
        ACCEPT     all             anywhere  anywhere     state RELATED,ESTABLISHED
        ACCEPT     ipv6-icmp       anywhere  anywhere
        ACCEPT     tcp             anywhere  anywhere     tcp dpt:http
        ACCEPT     tcp             anywhere  anywhere     tcp dpt:https

        Chain FORWARD (policy ACCEPT)
        target     prot opt source destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source destination

    If I try to connect to any IPv6 address it doesn't work:

        telnet gdata.youtube.com 80
        Trying 2a00:1450:4013:c00::76...

        telnet gdata.youtube.com 443
        Trying 2a00:1450:4013:c00::76...

    When I set:

        ip6tables -P INPUT ACCEPT

    it works. But then, well, then everything is open? What is going on? Help?

    Read the article

  • Firefox addon f@stestfox API sending/collecting data?

    - by Richard
    System: Ubuntu 64-bit / Firefox 24.0. Object: the addon "f@stestfox". It's a nice in-browser search tool and more. Problematic is the way the program handles the search queries. When I use a search shortcut, Burp Suite says:

        request to msgs.smarterfox.com:80

        GET /log_msg?name=popup_bubble_searched&search_engine_title=Search%20Startpage&source=FastestFox&redirect_to=https%3A%2F%2Fstartpage.com%2Fdo%2Fsearch%3Fcmd%3Dprocess_search%26cat%3Dweb%26query%3Dnginx%26language%3Denglish%26no_sugg%3D1%26ff%3D%26abp%3D-1&rand=856827465 HTTP/1.1
        Host: msgs.smarterfox.com
        User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-US,en;q=0.5
        Accept-Encoding: gzip, deflate
        DNT: 1
        Connection: keep-alive

    Once I saw a unique identifier (installation time?) sent with a request to the server. Am I right that the addon sends the website I am looking at to the server? Sometimes I only mark text (an IP address or a link) and the addon sends this data? Seriously? I did search for the URL in the code, but I don't speak Java. And I am not sure if the data from the request can actually be used for tracking :) Question: I want the awesome features of the addon without connecting to their server; marked text should be sent only to the search engines. What should I do next? Thank you.

    Read the article

  • Crash with the CoUninitialize() API of COM

    - by jbp117
    My main process (main.exe) initialized the COM library and created a thread which creates a new process (p1.exe). This new process again initializes the COM library and, after making all references zero, uninitializes COM there; COM is then uninitialized in the main process (i.e. main.exe) as well. When I run p1.exe individually it is successful, but it crashes when I create the process for p1.exe from main.exe.

    Read the article

  • Proxy settings in Java mail API

    - by coder
    I've written a piece of Java code where user1 sends email to user2. I'm behind a proxy and hence I'm getting a javax.mail.MessagingException. How do I solve this problem? Here is the code:

        import java.util.Properties;
        import javax.mail.Message;
        import javax.mail.MessagingException;
        import javax.mail.PasswordAuthentication;
        import javax.mail.Session;
        import javax.mail.Transport;
        import javax.mail.internet.InternetAddress;
        import javax.mail.internet.MimeMessage;

        public class Mail {
            public static void main(String[] args) {
                final String username = "[email protected]";
                final String password = "abc";

                Properties props = new Properties();
                props = System.getProperties();
                props.put("mail.smtp.auth", "true");
                props.put("mail.smtp.starttls.enable", "true");
                props.put("mail.smtp.host", "smtp.gmail.com");
                props.put("mail.smtp.port", "587");

                Session session = Session.getInstance(props, new javax.mail.Authenticator() {
                    protected PasswordAuthentication getPasswordAuthentication() {
                        return new PasswordAuthentication(username, password);
                    }
                });

                try {
                    Message message = new MimeMessage(session);
                    message.setFrom(new InternetAddress("[email protected]"));
                    message.setRecipients(Message.RecipientType.TO,
                            InternetAddress.parse("[email protected]"));
                    message.setSubject("Testing Subject");
                    message.setText("Dear Mail Crawler," + "\n\n No spam to my email, please!");
                    Transport.send(message);
                    System.out.println("Done");
                } catch (MessagingException e) {
                    throw new RuntimeException(e);
                }
            }
        }

    Read the article

  • Methods for preventing large-scale data scraping from a REST API

    - by Simon Kenyon Shepard
    I know the immediate answer to this is going to be that there is no 100% reliable method of doing this. But I'd like to create a question that details the different possibilities, the difficulty of implementing them, and their success rates. I would like to go from simple software IP/request-speed analysis to high-end, sophisticated soft/hardware tools, e.g. neural networks, with the goal of predicting and preventing bogus requests and attempts to scrape the service. Many thanks.
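
    At the simple end of that spectrum, per-IP request-rate limiting is usually the first measure tried; a minimal token-bucket sketch (thresholds are illustrative, and a real deployment would want a shared store such as Redis rather than process memory):

        import time
        from collections import defaultdict

        RATE, BURST = 5.0, 20.0  # tokens added per second, bucket capacity

        _buckets = defaultdict(lambda: {"tokens": BURST, "t": time.monotonic()})

        def allow(ip):
            # Refill the caller's bucket for the elapsed time, then spend a token.
            b = _buckets[ip]
            now = time.monotonic()
            b["tokens"] = min(BURST, b["tokens"] + (now - b["t"]) * RATE)
            b["t"] = now
            if b["tokens"] >= 1:
                b["tokens"] -= 1
                return True
            return False  # over the limit: the API layer would answer HTTP 429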

    Read the article

  • Use nginx proxy_pass and rewrite to hide third-party API credentials?

    - by brakertech
    Objective: I'd like to have nginx hide calls to a third-party API.

    Example request:

        http://example.com/myapi/60601

    Nginx's rewritten proxy_pass request calls:

        http://api.remix.bestbuy.com/v1/stores(area(60601,10))?show=storeId,name&apiKey=redacted_24_character_string

    Code I've tried (this works, but not for parameters):

        location ^~ /myapi/ {
            rewrite ^/myapi/(.*) /api/check/$1/key/e95fad09aa5091b7734d1a268b53cef5 break;
            proxy_pass http://api.server.com/;
        }

    Read the article

  • Puppet: write hosts using an API call

    - by Ben Smith
    I'm trying to write a Puppet function that calls my hosting environment (Rackspace Cloud at the moment) to list servers, then updates my hosts file. My get_hosts function is currently this:

        require 'rubygems'
        require 'cloudservers'

        module Puppet::Parser::Functions
          newfunction(:get_hosts, :type => :rvalue) do |args|
            unless args.length == 1
              raise Puppet::ParseError, "Must provide the datacenter"
            end
            DC = args[0]
            USERNAME = DC == "us" ? "..." : "..."
            API_KEY = DC == "us" ? "..." : "..."
            AUTH_URL = DC == "us" ? CloudServers::AUTH_USA : CloudServers::AUTH_UK
            DOMAIN = "..."

            cs = CloudServers::Connection.new(:username => USERNAME, :api_key => API_KEY, :auth_url => AUTH_URL)
            cs.list_servers_detail.map {|server|
              server.map {|s| { s[:name] + "." + DC + DOMAIN => { :ip => s[:addresses][:private][0], :aliases => s[:name] }}}
            }
          end
        end

    And I have a hosts.pp that calls this and 'should' write it to /etc/hosts:

        class hosts::us {
          $hosts = get_hosts("us")
          hostentry { $hosts: }
        }

        define hostentry() {
          host{ $name:
            ip           => $name[ip],
            host_aliases => $name[aliases]
          }
        }

    As you can imagine, this isn't currently working, and I'm getting a 'Symbol as array index at /etc/puppet/manifests/hosts.pp:2' error. I imagine, once I've realised what I'm currently doing wrong, there will be more errors to come. Is this a good idea? Can someone help me work out how to do this?

    Read the article

  • Getting a new session key after Facebook offline_access permission

    - by Richard
    I have a mobile application that I'm using with Facebook Connect. I'm having trouble getting an offline_access session key after a user has granted extended permissions. Here's the user flow:

    1. User goes to my site for the first time.
    2. I send them to m.facebook.com/tos.php? and pass my API key and secret.
    3. The user logs in using Facebook Connect.
    4. Facebook returns them to a page in my site, mysite/login-success.php, with an auth_token in the query string.
    5. On mysite/login-success.php I instantiate the FB API client and check to see if I already have an offline_access session key for them:

            $facebook = new Facebook($appapikey, $appsecret);

    6. If they haven't already provided offline_access, FB gives me a temporary session key.
    7. I need to get offline_access permission from the user, so I forward them on to www.facebook.com/connect/prompt_permissions.php? and pass offline_access in the query string.
    8. The user authorizes offline_access and gets forwarded to mysite/permissions-success.php.

    The problem I'm having is that after instantiating the API client on permissions-success.php, the session key I have is still the temporary session key, not a new offline_access session key. The only way I've found to get the offline_access key is to delete all cookies for the user and then have them log in again using Facebook Connect -- a fairly poor user experience. Can anyone shed some light on how to use the Facebook API to generate a new session key even if one already exists (in my case, a temporary session key)?

    Read the article

  • PHP Facebook Cronjob with offline access

    - by Mohamed Salem
    1: The code to greet the user, ask for his permission and store his session data, so that we can use a cronjob with his session data afterwards:

        <?php
        $db_server = "localhost";
        $db_username = "username";
        $db_password = "password";
        $db_name = "databasename";

        # go to line 85, the script actually starts there
        mysql_connect($db_server, $db_username, $db_password);
        mysql_select_db($db_name);
        # you have to create a database to store session values.
        # if you do not know what columns there should be look at line 76 to see column names.
        # make them all varchars

        # Now lets load the FB GRAPH API
        require './facebook.php';

        // Create our Application instance.
        global $facebook;
        $facebook = new Facebook(array(
            'appId'  => '121036530138',
            'secret' => '9bbec378147064',
            'cookie' => false,));

        # Lets set up the permissions we need and set the login url in case we need it.
        $par['req_perms'] = "friends_about_me,friends_education_history,friends_likes,
            friends_interests,friends_location,friends_religion_politics,
            friends_work_history,publish_stream,friends_activities,friends_events,
            friends_hometown,friends_location,user_interests,user_likes,user_events,
            user_about_me,user_status,user_work_history,read_requests,
            read_stream,offline_access,user_religion_politics,email,user_groups";
        $loginUrl = $facebook->getLoginUrl($par);

        function save_session($session){
            global $facebook;
            # OK lets go to the database and see if we have a session stored
            $sid = mysql_query("Select access_token from facebook_user WHERE uid =".$session['uid']);
            $session_id = mysql_fetch_row($sid);
            if (is_array($session_id)) {
                # We have a stored session, but is it valid?
                echo " We have a session, but is it valid?";
                try {
                    $attachment = array('access_token' => $session_id[0]);
                    $ret_code = $facebook->api('/me', 'GET', $attachment);
                } catch (Exception $e) {
                    # We don't have a good session so
                    echo " our old session is not valid, let's delete saved invalid session data ";
                    $res = mysql_query("delete from facebook_user WHERE uid =".$session['uid']);
                    # save new good session
                    # to see what is our session data:
                    print_r($session);
                    if (is_array($session)) {
                        $sql = "insert into facebook_user (session_key,uid,expires,secret,access_token,sig) VALUES ('"
                            .$session['session_key']."','".$session['uid']."','".$session['expires']."','"
                            .$session['secret']."','".$session['access_token']."','".$session['sig']."');";
                        $res = mysql_query($sql);
                        return $session['access_token'];
                    }
                    # this should never ever happen
                    echo " Something is terribly wrong: Our old session was bad, and now we cannot get the new session";
                    return;
                }
                echo " Our old stored session is valid ";
                return $session_id[0];
            } else {
                echo " no stored session, this means the user never subscribed to our application before. ";
                # let's store the session
                $session = $facebook->getSession();
                if (is_array($session)) {
                    # Yes we have a session! so lets store it!
                    $sql = "insert into facebook_user (session_key,uid,expires,secret,access_token,sig) VALUES ('"
                        .$session['session_key']."','".$session['uid']."','".$session['expires']."','"
                        .$session['secret']."','".$session['access_token']."','".$session['sig']."');";
                    $res = mysql_query($sql);
                    return $session['access_token'];
                }
            }
        }

        # this is the first meaningful line of this script.
        $session = $facebook->getSession();
        # Is the user already subscribed to our application?
        if ( is_null($session) ) {
            # no he is not
            # send him to permissions page
            header( "Location: $loginUrl" );
        } else {
            # yes, he is already subscribed, or subscribed just now
            # in case he just subscribed now, save his session information
            $access_token = save_session($session);
            echo " everything is ok";
            # write your code here to do something afterwards
        }
        ?>

    Error:

        Warning: session_start() [function.session-start]: Cannot send session cache limiter - headers already sent (output started at /home/content/28/9687528/html/ss/src/indexx.php:1) in /home/content/28/9687528/html/ss/src/facebook.php on line 49
        Fatal error: Call to undefined method Facebook::getSession() in /home/content/28/9687528/html/ss/src/indexx.php on line 86

    2: A cronjob template that reads the stored session of a user from the database and uses his session data to work on his behalf, like reading status posts or publishing posts, etc.:

        <?php
        $db_server = "localhost";
        $db_username = "username";
        $db_password = "pass";
        $db_name = "database";

        # Lets connect to the Database and set up the table
        $link = mysql_connect($db_server, $db_username, $db_password);
        mysql_select_db($db_name);

        # Now lets load the FB GRAPH API
        require './facebook.php';

        // Create our Application instance.
        global $facebook;
        $facebook = new Facebook(array(
            'appId'  => 'appid',
            'secret' => 'secret',
            'cookie' => false,
        ));

        function get_check_session($uidCheck){
            global $facebook;
            # This function basically checks for a stored session and if we have one it returns it
            # OK lets go to the database and see if we have a session stored
            $sid = mysql_query("Select access_token from facebook_user WHERE uid =".$uidCheck);
            $session_id = mysql_fetch_row($sid);
            if (is_array($session_id)) {
                # We have a session, but is it valid?
                try {
                    $attachment = array('access_token' => $session_id[0],);
                    $ret_code = $facebook->api('/me', 'GET', $attachment);
                } catch (Exception $e) {
                    # We don't have a good session so
                    echo " User ".$uidCheck." removed the application, or there is some other access problem. ";
                    # let's delete stored data
                    $res = mysql_query("delete from facebook_user where WHERE uid =".$uidCheck);
                    return;
                }
                return $session_id[0];
            } else {
                # "no stored session";
                echo " error: newsFeedcrontab.php No stored sessions. This should not have happened ";
            }
        }

        # get all users that have given us offline access
        $users = getUsers();
        foreach($users as $user){
            # now for each user, check if they are still subscribed to our application
            echo " Checking user".$user;
            $access_token = get_check_session($user);
            # If we've not got an access_token we actually need to login.
            # but in the crontab, we just log the error, there is no way we can find the user to give us permission here.
            if ( is_null($access_token) ) {
                echo " error: newsFeedcrontab.php There is no access token for the user ".$user." ";
            } else {
                # we are going to read the newsfeed of user. There are user's friends' posts in this newsfeed
                try{
                    $attachment = array('access_token' => $access_token);
                    $result = $facebook->api('/me/home', 'GET', $attachment);
                }catch(Exception $e){
                    echo " error: newsfeedcrontab.php, cannot get feed of ".$user.$e;
                }
                # do something with the result here
                # but what does the result look like?
                # go to http://developers.facebook.com/docs/reference/api/user/ and click on the "home" link under connections

                # we can also read the home of user. Home is the wall of the user who has given us offline access.
                try{
                    $attachment = array('access_token' => $access_token);
                    $result = $facebook->api('/me/feed', 'GET', $attachment);
                }catch(Exception $e){
                    echo " error: newsfeedcrontab.php, cannot get wall of ".$user.$e;
                }
                # do something with the result here
                # but what does the result look like?
                # go to http://developers.facebook.com/docs/reference/api/user/ and click on the "feed" link under connections
            }
        }

        function getUsers(){
            $sql = "SELECT distinct(uid) from facebook_user Where 1";
            $result = mysql_query($sql);
            while($row = mysql_fetch_array($result)){
                $rows[] = $row['uid'];
            }
            print_r($rows);
            return $rows;
        }

        mysql_close($link);
        ?>

    Error:

        Warning: session_start() [function.session-start]: Cannot send session cache limiter - headers already sent (output started at /home/content/28/9687528/html/ss/src/cron.php:1) in /home/content/28/9687528/html/ss/src/facebook.php on line 49
        Warning: mysql_fetch_array(): supplied argument is not a valid MySQL result resource in /home/content/28/9687528/html/ss/src/cron.php on line 110
        Warning: Invalid argument supplied for foreach() in /home/content/28/9687528/html/ss/src/cron.php on line 64

    Read the article

  • Is there a case for parameterising using Abstract classes rather than Interfaces?

    - by Chris
    I'm currently developing a component-based API that is heavily stateful. The top-level components implement around a dozen interfaces each. The stock top-level components therefore sit on top of a stack of abstract implementations which in turn contain multiple mixin implementations and implement multiple mixin interfaces. So far, so good (I hope).

    The problem is that the base functionality is extremely complex to implement (1,000s of lines in 5 layers of base classes), and therefore I do not wish for component writers to implement the interfaces themselves but rather to extend my base classes (where all the boilerplate code is already written). If the API accepts interfaces rather than references to the abstract implementation that I wish component writers to extend, then I run the risk that the implementer will not perform the validation that is both required and assumed by other areas of the code.

    Therefore, my question is: is it sometimes valid to parameterise API methods using an abstract implementation reference rather than a reference to the interface(s) it implements? Do you have an example of a well-designed API that uses this technique, or am I trying to talk myself into bad practice?
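
    The pattern being described -- accept the abstract base so a non-overridable entry point can enforce the validation -- is essentially the template method pattern. A minimal sketch in Python with hypothetical names (the question's API is not Python, so this is only the shape of the idea):

        from abc import ABC, abstractmethod

        class AbstractComponent(ABC):
            """Base class component writers extend instead of implementing interfaces raw."""

            def process(self, payload: str) -> str:
                # The entry point the API calls: validation always runs here,
                # no matter what a subclass does.
                if not payload:
                    raise ValueError("payload must be non-empty")
                return self._do_process(payload)

            @abstractmethod
            def _do_process(self, payload: str) -> str:
                """The only hook a component writer implements."""

        def run(component: AbstractComponent, payload: str) -> str:
            # Parameterising on the abstract base (not an interface) lets the
            # API rely on process() having validated its input.
            return component.process(payload)

        class UpperCaser(AbstractComponent):
            def _do_process(self, payload: str) -> str:
                return payload.upper()

        print(run(UpperCaser(), "hello"))  # -> HELLO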

    Read the article

  • Is there a better way to consume an ASP.NET Web API call in an MVC controller?

    - by davidisawesome
    In a new project I am creating for my work, I am creating a fairly large ASP.NET Web API. The API will be in a separate Visual Studio solution that also contains all of my business logic and database interactions, as well as Model classes. In the test application I am creating (which is ASP.NET MVC 4), I want to be able to hit an API URL I defined from the controller and cast the returned JSON to a Model class. The reason behind this is that I want to take advantage of strongly typing my views to a Model. This is all still in a proof-of-concept stage, so I have not done any performance testing on it, but I am curious if what I am doing is a good practice, or if I am crazy for even going down this route. Here is the code on the client controller:

        public class HomeController : Controller
        {
            protected string dashboardUrlBase = "http://localhost/webapi/api/StudentDashboard/";

            public ActionResult Index() // This view is strongly typed against User
            {
                // testing against Joe Bob
                string adSAMName = "jBob";
                WebClient client = new WebClient();
                string url = dashboardUrlBase + "GetUserRecord?userName=" + adSAMName;

                // 'User' is a Model class that I have defined.
                User result = JsonConvert.DeserializeObject<User>(client.DownloadString(url));
                return View(result);
            }
            .
            .
            .
        }

    If I choose to go this route, another thing to note is that I am loading several partial views on this page (as I will also do on subsequent pages). The partial views are loaded via an $.ajax call that hits this controller and does basically the same thing as the code above:

    1. Instantiate a new WebClient.
    2. Define the URL to hit.
    3. Deserialize the result and cast it to a Model class.

    So it is possible (and likely) that I could be performing the same actions 4-5 times for a single page. Is there a better method to do this that will:

    1. Let me keep strongly typed views.
    2. Do my work on the server rather than on the client (this is just a preference, since I can write C# faster than I can write JavaScript).

    Read the article

  • Photoshop: why does a low-quality JPG saved at high quality increase in file size?

    - by Alex Angelico
    I have a low-quality background of about 85 KB. If I open the file in Photoshop and save it at 100% quality, the file size increases to 640 KB. This doesn't make sense: the image is already compressed, and the quality cannot be better than the source, so saving at 100% quality should produce THE SAME FILE I opened.

    The problem is, I have this background and want to add a logo image over it, and then save it. But if I do so, I have to save the result with 10% quality to get back to the original size, and then the logo looks horrible. If I save at 100% quality, the file size is huge. How can I achieve this?

    Read the article

  • Why are cryptic short identifiers still so common in low-level programming?

    - by romkyns
    There used to be very good reasons for keeping instruction / register names short. Those reasons no longer apply, but short cryptic names are still very common in low-level programming. Why is this? Is it just because old habits are hard to break, or are there better reasons? For example:

    - Atmel ATMEGA32U2 (2010?): TIFR1 (instead of TimerCounter1InterruptFlag), ICR1H (instead of InputCapture1High), DDRB (instead of DataDirectionPortB), etc.
    - .NET CLR instruction set (2002): bge.s (instead of branch-if-greater.signed), etc.

    Aren't the longer, non-cryptic names easier to work with?

    Read the article

  • Create a screencast on a low-end PC, but fast (maybe by sacrificing compression?)

    - by josinalvo
    As the title suggests, I am asking a lot. We've been trying to generate some screencasts on my Eee PC. recordMyDesktop is doing the job decently, but only if allowed time to "compile" the video afterwards. If we ask it to encode on the fly, video and audio get out of sync.

    Now, we are creating many screencasts as practice (and like to watch them afterwards, to critique them). Reducing quality is undesirable, because eventually a good practice run becomes the one we'll release. So we'd like a way to do screencasts on the fly, with decent quality, on the low-end machine. As nothing is ever free, we are willing to sacrifice: we don't care too much about compression -- 20 GB for a 15-minute video is acceptable.

    Read the article
