Search Results

Search found 14767 results on 591 pages for 'twitter api'.


  • Compiling zip component for PHP 5.2.11 in MAMP PRO

    - by Zlatoroh
    Hello, I installed MAMP PRO on my MacBook Pro (10.6) some time ago. Now I would like to use the zip functions in PHP. I found that I must add zip.so to my extensions folder and edit php.ini. On my computer I have two different versions of PHP: one in the MAMP folder and another in user/lib which was pre-installed on my system. Now I wish to compile my zip library for the MAMP version. I got the zip sources for my version of PHP, then in the terminal called /Applications/MAMP/bin/php5/bin/phpize (so it uses the MAMP PHP version), then ./configure and make, then I moved the compiled zip.so to extensions/no-debug-non-zts-20060613. When MAMP is launched it returns this error: [11-Apr-2010 16:33:27] PHP Warning: PHP Startup: zip: Unable to initialize module Module compiled with module API=20090626, debug=0, thread-safety=0 PHP compiled with module API=20060613, debug=0, thread-safety=0 These options need to match in Unknown on line 0 Can somebody explain to me how to do this the right way?

    Read the article

  • Supporting different locale regions using Rails i18n

    - by Olly
    I'm using the standard Rails I18n API to localise some of our views. This is working really well, but we now have a few use cases for regional changes to the en locale. The API guide mentions that this isn't supported directly, and other plugins should be used. However, I'm wondering whether there's a simpler way to do this. I already have en.yml, so in theory I could just create en-AU.yml and en-US.yml which are effectively clones of en.yml but with a few regional changes applied. I could then add additional English - American and English - Australian options to our configuration which would map to the new region-specific locales and allow users to use a region-specific locale. The only problem I can think of with this is that it isn't DRY -- I would have duplicate translations for all common English words. I can't see a way around this. Are there any other disadvantages to this approach, or should I just bite the bullet and dive into one of the plug-ins such as Globalize2 instead?

    Read the article

  • Confusion on using django socialauth

    - by Fedor
    http://github.com/uswaretech/Django-Socialauth/tree/master/socialauth/ I'm a bit confused about how I should use this. Of course, I read the notes at the bottom, but I'm a Django novice so I'll need a little hand-holding. The structure of this looks like a project structure since it contains a urls.py, but I'm also aware that applications can have that too. It also has a manage.py, which leads me to believe it's a project (plus the subdirectories). So should I just be integrating portions of this into my existing project? This isn't an application, right? The README also mentions grabbing API keys. So if I want a standard interface where you click on a Google/Yahoo logo and it forwards you via JavaScript to the authentication page, where you log in if you aren't already logged in and then get kicked back to your own page, would I need API keys? Any other special tips are appreciated.

    Read the article

  • How to create "recurData" in Google Calendar?

    - by Pari
    I want to create recurring Calendar events using the Google API. I am following these links: Google Calendar API. I don't understand how to create the "recurData"; I can't simply modify a string and pass it as a parameter. I also tried DDay.iCal version 0.80 (DDay.iCal). There is some example code given; I tried it and I am able to create an ".ics" file. But when I pass this file content as "recurData" I get the error: {"Execution of request failed: http://www.google.com/calendar/feeds/[email protected]/private/full?gsessionid=AHItK5wrSIoJVawFjGt-0g"} My .ics file content is: BEGIN:VCALENDAR VERSION:2.0 PRODID:-//DDay.iCal//NONSGML ddaysoftware.com//EN BEGIN:VEVENT CREATED:20100309T132930Z DESCRIPTION:The event description DTEND:20100310T020000 DTSTAMP:20100309T132930Z DTSTART:20100309T080000 LOCATION:Event location SEQUENCE:0 SUMMARY:18 hour event summary UID:396c6b22-277f-4496-bbe1-d3692dc1b223 END:VEVENT BEGIN:VEVENT CREATED:20100309T132930Z DTEND;VALUE=DATE:20100315 DTSTAMP:20100309T132930Z DTSTART;VALUE=DATE:20100314 SEQUENCE:0 SUMMARY:All-day event UID:ac25cdaf-4e95-49ad-a770-f04f3afc1a2f END:VEVENT END:VCALENDAR I made it using "Example6".

    Read the article
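
    A possible explanation worth checking (stated here as an assumption, not a confirmed diagnosis): the Calendar API's recurrence field generally expects a bare iCalendar fragment - DTSTART/DTEND/RRULE properties only - rather than a complete BEGIN:VCALENDAR document like the one DDay.iCal writes to the .ics file. A minimal sketch of building such a fragment, in Java for illustration (the question itself uses the .NET client); the dates and RRULE are invented example values:

    // Build recurrence data as a bare iCalendar fragment (RFC 5545 properties),
    // not a full VCALENDAR file. The exact shape your client library expects
    // should be verified against the Calendar API documentation.
    public class RecurrenceSketch {
        public static String weeklyRecurrence() {
            StringBuilder sb = new StringBuilder();
            sb.append("DTSTART;VALUE=DATE:20100505\r\n");
            sb.append("DTEND;VALUE=DATE:20100506\r\n");
            sb.append("RRULE:FREQ=WEEKLY;BYDAY=TU;UNTIL=20100904\r\n");
            return sb.toString();
        }

        public static void main(String[] args) {
            // The resulting string is what would be assigned to the event's
            // recurrence property in whichever client library is used.
            System.out.println(weeklyRecurrence());
        }
    }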

  • Cross-Page communication in firefox extension

    - by OzBarry
    I have two tabs that my extension uses and I wanted to pass events back and forth between them. I've already developed a Google Chrome extension that does this via the background page api, but there doesn't seem to be an equivalent in firefox. I thought message-manager in the firefox extension docs would do the trick, but the documentation on the object is quite poor. I'd be just as happy with using one of the tabs to control the other if I can't directly import the ideas of a background page from google chrome api. Any help/guidance would be great.

    Read the article

  • How Should I Print Documentation from Google Code?

    - by peter.newhook
    Google does a decent job of documenting their APIs (like Closure http://code.google.com/closure/compiler/docs/overview.html), but I find them hard to read because they're broken into such short pages. I like to leaf through my docs and read them on paper. Has anyone found a good way to print the documentation on Google Code? It could be a PDF, or even just a long page with lots of content. Please note, I'm not talking about the wikis in the open-source side of Google Code. I'm referring to the API docs published by Google.

    Read the article

  • RESTful issue with data access when using HTTP DELETE method ...

    - by Wilhelm Murdoch
    I'm having an issue accessing raw request information from PHP when accessing a script using the HTTP DELETE directive. I'm using a JS front end which is accessing a script using Ajax. This script is actually part of a RESTful API which I am developing. The endpoint in this example is: http://api.site.com/session This endpoint is used to generate an authentication token which can be used for subsequent API requests. Using the GET method on this URL along with a modified version of HTTP Basic Authentication will provide an access token for the client. This token must then be included in all other interactions with the service until it expires. Once a token is generated, it is passed back to the client in a format specified by an 'Accept' header which the client sends the service; in this case 'application/json'. Upon success it responds with an HTTP 200 OK status code. Upon failure, it throws an exception using the HTTP 401 Authorization Required code. Now, when you want to delete a session, or 'log out', you hit the same URL, but with the HTTP DELETE directive. To verify access to this endpoint, the client must prove they were previously authenticated by providing the token they want to terminate. If they are 'logged in', the token and session are terminated and the service should respond with the HTTP 204 No Content status code; otherwise, they are greeted with the 401 exception again. Now, the problem I'm having is with removing sessions. With the DELETE directive, using Ajax, I can't seem to access any parameters I've set once the request hits the service. In this case, I'm looking for the parameter entitled 'token'. I look at the raw request headers using Firebug and I notice the 'Content-Length' header changes with the size of the token being sent. This is telling me that this data is indeed being sent to the server. The question is, using PHP, how the hell do I access parameter information? It's not a POST or GET request, so I can't access it as you normally would in PHP. The parameters are within the content portion of the request. I've tried looking in $_SERVER, but that shows me a limited set of headers. I tried 'apache_request_headers()', which gives me more detailed information, but still, only for headers. I even tried 'file_get_contents('php://stdin');' and I get nothing. How can I access the content portion of a raw HTTP request? Sorry for the lengthy post, but I figured too much information is better than too little. :)

    Read the article
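
    The underlying point is protocol-level: parameters sent with a DELETE travel in the raw entity body, so the server has to read that body itself rather than rely on the parsed GET/POST arrays (in PHP the raw body is exposed through php://input, not php://stdin). As a hedged illustration of the same idea in servlet terms - Java is used for all sketches on this page, and the class and parameter names below are invented:

    import java.io.BufferedReader;
    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Read the raw entity body of a DELETE request, then pull a "token"
    // parameter out of it by hand.
    public class SessionServlet extends HttpServlet {
        @Override
        protected void doDelete(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            StringBuilder body = new StringBuilder();
            BufferedReader reader = req.getReader();   // raw body, not parsed params
            String line;
            while ((line = reader.readLine()) != null) {
                body.append(line);
            }
            // body now holds e.g. "token=abc123"; parse it like a query string.
            String token = null;
            for (String pair : body.toString().split("&")) {
                String[] kv = pair.split("=", 2);
                if (kv.length == 2 && kv[0].equals("token")) {
                    token = java.net.URLDecoder.decode(kv[1], "UTF-8");
                }
            }
            if (token == null) {
                resp.sendError(HttpServletResponse.SC_UNAUTHORIZED);
            } else {
                // terminate the session here, then:
                resp.setStatus(HttpServletResponse.SC_NO_CONTENT);
            }
        }
    }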

  • VB.net, disable proxy for entire program

    - by Brent
    Ever since upgrading to Visual Studio 2010, I'm running into an issue where the first web request of any type (WebRequest, WebClient, etc.) hangs for about 20 seconds before completing. Subsequent calls work quickly. I've narrowed down the problem to a proxy issue. If I manually disable proxy settings, I don't experience this delay: Dim wrq As WebRequest = WebRequest.Create(Url) wrq.Proxy = Nothing What's strange is that there are no proxy settings enabled on this machine in Internet Options. What I'm wondering is if there is a way to disable proxy settings for my entire project in one shot without explicitly disabling as above for every web object. The main reason I want to be able to do this is that I'm trying to use an API (http://code.google.com/p/google-api-for-dotnet/) which uses web requests, but does not provide any way to manually disable proxy settings. Can anyone point me in the right direction? Thanks!

    Read the article

  • Facebook dotnet app - not finding any user info

    - by Karen
    I am just starting to learn how to create a Facebook app. I am using the MS .NET libraries. Following this example and info: http://fbtutorial.qsh.eu/section1/fbml/step6.aspx I added some code in the default.aspx to print out the user name, but I am not getting any user information back. I understand that some user information is available even if the application is not authorized, such as the name or user id. For example, this code returns u = null: Facebook.Schema.user u = Master.Api.Users.GetInfo(); And this returns 0: this.Master.Api.Session.UserId Any suggestions as to why no information is getting returned? Thanks.

    Read the article

  • Auto scrolling or shifting a bitmap in .NET

    - by mikej
    I have a .NET GDI+ bitmap object (or, if it makes the problem easier, a WPF bitmap object) and what I want to do is shift the whole lot by dx,dy (whole pixels), and I would ideally like to do it using .NET, but API calls are OK. It has to be efficient because it's going to be called, say, 10,000 times with moderately large bitmaps. I have implemented a solution using DrawImage - but it's slow, and it halts the application for minutes while the GC cleans up the temp objects that have been used. I have also started to work on a version using ScrollDC, but so far have had no luck getting it to work on the DC of the bitmap (I can make it work by creating an API bitmap with the bitmap handle, then creating a compatible DC and calling ScrollDC, but then I have to put it back into the bitmap object). There has to be an "in-place" way of shifting a bitmap. mikej

    Read the article
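
    Not a GDI+ answer, but the same "shift in place" idea sketched in Java (the language used for the examples on this page): most 2D APIs expose a blit-within-the-same-surface call, much as ScrollDC moves bits within a DC at the Win32 level, which avoids allocating a temporary image on every shift.

    import java.awt.Graphics2D;
    import java.awt.image.BufferedImage;

    // Shift the contents of a bitmap by (dx, dy) "in place" by blitting the
    // image onto itself; no temporary full-size copy is allocated.
    public final class BitmapShift {
        public static void shift(BufferedImage img, int dx, int dy) {
            Graphics2D g = img.createGraphics();
            try {
                // copyArea moves the existing pixels within the same surface.
                g.copyArea(0, 0, img.getWidth(), img.getHeight(), dx, dy);
            } finally {
                g.dispose();
            }
        }

        public static void main(String[] args) {
            BufferedImage img = new BufferedImage(640, 480, BufferedImage.TYPE_INT_ARGB);
            shift(img, 5, -3); // scroll right 5 px and up 3 px
        }
    }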

  • How to autocomplete library class functions in Xcode

    - by Bruce Ling
    Hi, I am new to Xcode. I am writing a C++ command line project. I used to use NetBeans. For example, if I use a string, then I should be able to invoke the string library's API functions in a popup list to choose from for autocompletion. So now I don't know how to do that in Xcode. Can someone help? My question is specific: how do I find out a library's API functions? Thanks.

    Read the article

  • Probability distribution for SMS answer delays

    - by Thomas Ahle
    I'm writing an app using SMS as communication. I have chosen to subscribe to an SMS gateway, which provides me with an API for doing so. The API has functions for sending as well as pulling new messages. It does not, however, have any kind of push functionality. In order to make my queries most efficient, I'm seeking data on how long people wait before they answer a text message - as a probability distribution. Extra info: the application is interactive (as can be), so I suppose the times will be pretty similar to real-life human-human communication. I don't believe differences in personal style will have a big impact on the right times and frequencies to query, so average data should be fine.

    Read the article
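
    There is no measured distribution in the question itself, but human reply delays are commonly modelled as heavy-tailed, e.g. log-normal; that model and its parameters are assumptions here, not data. A small sketch that samples such a delay to decide when to poll the gateway next:

    import java.util.Random;

    // Model "time until the user replies" as log-normal (an assumed model,
    // not measured data) and use samples of it to schedule polling.
    public final class ReplyDelayModel {
        private final Random rng = new Random();
        private final double mu;     // mean of log(delay in seconds)
        private final double sigma;  // std dev of log(delay in seconds)

        public ReplyDelayModel(double mu, double sigma) {
            this.mu = mu;
            this.sigma = sigma;
        }

        /** One random reply delay, in seconds. */
        public double sampleDelaySeconds() {
            return Math.exp(mu + sigma * rng.nextGaussian());
        }

        public static void main(String[] args) {
            // mu = ln(60): a "typical" reply around a minute; sigma = 1.0 gives a long tail.
            ReplyDelayModel model = new ReplyDelayModel(Math.log(60), 1.0);
            for (int i = 0; i < 5; i++) {
                System.out.printf("poll again in ~%.0f s%n", model.sampleDelaySeconds());
            }
        }
    }

    Fitting mu and sigma would still require logging real send/reply timestamps from the gateway; the sketch only shows where such a distribution would plug in.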

  • What are the pros and cons of using an in-memory DB rather than a ThreadLocal

    - by Pangea
    We have been using ThreadLocal so far to carry some data so as not to clutter the API. However, below are some of the issues with using ThreadLocal that I don't like: 1) over the years, the number of data items being carried in the ThreadLocal has increased; 2) since we started using threads (for some lightweight processing), we have also been migrating these data to the threads in the pool and copying them back again. I am thinking of using an in-memory DB for these (we don't want to add this to the API). I'm wondering if this approach is good. What are the pros and cons? Thanks in advance.

    Read the article
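
    One alternative to weigh against an in-memory DB is making the hand-off to pooled threads explicit: snapshot the ThreadLocal when a task is submitted and reinstall it inside the worker. A minimal sketch, with class and key names invented for illustration:

    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // A request context carried in a ThreadLocal, captured at submit time and
    // re-installed inside the worker thread.
    public final class RequestContext {
        private static final ThreadLocal<Map<String, Object>> CTX =
                ThreadLocal.withInitial(HashMap::new);

        public static void put(String key, Object value) { CTX.get().put(key, value); }
        public static Object get(String key) { return CTX.get().get(key); }
        public static Map<String, Object> snapshot() { return new HashMap<>(CTX.get()); }
        public static void install(Map<String, Object> snap) { CTX.set(new HashMap<>(snap)); }
        public static void clear() { CTX.remove(); }

        /** Wrap a task so it sees the submitting thread's context. */
        public static Runnable propagating(Runnable task) {
            Map<String, Object> snap = snapshot();
            return () -> {
                install(snap);
                try {
                    task.run();
                } finally {
                    clear(); // don't leak context into the next pooled task
                }
            };
        }

        public static void main(String[] args) {
            ExecutorService pool = Executors.newFixedThreadPool(2);
            put("tenantId", "acme");
            pool.submit(propagating(() ->
                    System.out.println("worker sees tenantId=" + get("tenantId"))));
            pool.shutdown();
        }
    }

    The trade-off in a nutshell: a ThreadLocal (or an explicit context object) keeps access in-process and cheap but has to be copied across thread boundaries, while an in-memory DB centralises the data at the cost of lookups, keying every read by some request or transaction id, and cleaning up stale entries.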

  • Where to start with Direct2D?

    - by ShrimpCrackers
    Interested in learning Direct2D to create a Windows 8 app, but after 2 hours of research I'm thoroughly confused. Samples like this (Creating a Simple Direct2D Application) seem to assume you know what an HWND and an HRESULT are, and how the Windows API works in general. My question is this: do you need an understanding of the Win32 API, COM, OLE, and all this other Windows stuff in order to get a good grasp on Direct2D/Direct3D? All the other barebones tutorials assume that you know all this stuff, and I don't really know where to start. The startup D2D project in VS 2012 gives you a bunch of files, but there's no main or WinMain... How does this program even start?

    Read the article

  • Trying to parse an XML file from a URL and it won't work

    - by ida
    This page shows an XML file, and I'm trying to use SimpleXML to parse the data out and print it. What am I missing? All it does is show a blank page when I run it. <?php $url = "http://api.scribd.com/api?method=docs.getList&api_key=2apz5npsqin3cjlbj0s6m"; $xml = new SimpleXMLElement($url,NULL,true); foreach($xml -> result as $value) { echo $value->doc_id."<br/>"; echo $value->access_key."<br/>"; echo $value->secret_password."<br/>"; echo $value->title."<br/>"; } ?>

    Read the article
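
    The question is PHP-specific, but for comparison here is the same fetch-and-walk flow sketched in Java (the language used for the examples on this page). The element names below simply mirror the ones the PHP snippet expects and are not checked against the actual Scribd response schema; the API key is a placeholder:

    import java.net.URL;
    import javax.xml.parsers.DocumentBuilder;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.NodeList;

    // Download an XML document from a URL and print selected child elements
    // of each <result> node.
    public final class XmlListDump {
        public static void main(String[] args) throws Exception {
            String url = "http://api.scribd.com/api?method=docs.getList&api_key=YOUR_API_KEY";
            DocumentBuilder db = DocumentBuilderFactory.newInstance().newDocumentBuilder();
            Document doc = db.parse(new URL(url).openStream());

            NodeList results = doc.getElementsByTagName("result");
            for (int i = 0; i < results.getLength(); i++) {
                Element r = (Element) results.item(i);
                System.out.println(text(r, "doc_id"));
                System.out.println(text(r, "access_key"));
                System.out.println(text(r, "title"));
            }
        }

        private static String text(Element parent, String tag) {
            NodeList nl = parent.getElementsByTagName(tag);
            return nl.getLength() > 0 ? nl.item(0).getTextContent() : "";
        }
    }

    Searching by tag name regardless of nesting also sidesteps the "blank page" failure mode where a loop silently matches nothing because the expected element sits one level deeper in the response than assumed.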

  • Slowdowns when reading from a URLConnection's InputStream (even with byte[] and buffers)

    - by user342677
    Ok, so after spending two days trying to figure out the problem, and reading about a dizillion articles, I finally decided to man up and ask for some advice (my first time here). Now to the issue at hand - I am writing a program which will parse API data from a game, namely battle logs. There will be A LOT of entries in the database (20+ million) and so the parsing speed for each battle log page matters quite a bit. The pages to be parsed look like this: http://api.erepublik.com/v1/feeds/battle_logs/10000/0. (See the source code if using Chrome; it doesn't display the page right.) It has 1000 hit entries, followed by a little battle info (the last page will have <1000, obviously). On average, a page contains 175000 characters, UTF-8 encoding, XML format (v 1.0). The program will run locally on a good PC; memory is virtually unlimited (so creating byte[250000] is quite OK). The format never changes, which is quite convenient. Now, I started off as usual: //global vars,class declaration skipped public WebObject(String url_string, int connection_timeout, int read_timeout, boolean redirects_allowed, String user_agent) throws java.net.MalformedURLException, java.io.IOException { // Open a URL connection java.net.URL url = new java.net.URL(url_string); java.net.URLConnection uconn = url.openConnection(); if (!(uconn instanceof java.net.HttpURLConnection)) { throw new java.lang.IllegalArgumentException("URL protocol must be HTTP"); } conn = (java.net.HttpURLConnection) uconn; conn.setConnectTimeout(connection_timeout); conn.setReadTimeout(read_timeout); conn.setInstanceFollowRedirects(redirects_allowed); conn.setRequestProperty("User-agent", user_agent); } public void executeConnection() throws IOException { try { is = conn.getInputStream(); //global var l = conn.getContentLength(); //global var } catch (Exception e) { //handling code skipped } } //getContentStream and getLength methods which just return 'is' and 'l' are skipped Here is where the fun part began. I ran some profiling (using System.currentTimeMillis()) to find out what takes long and what doesn't. The call to this method takes only 200ms on average: public InputStream getWebPageAsStream(int battle_id, int page) throws Exception { String url = "http://api.erepublik.com/v1/feeds/battle_logs/" + battle_id + "/" + page; WebObject wobj = new WebObject(url, 10000, 10000, true, "Mozilla/5.0 " + "(Windows; U; Windows NT 5.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 ( .NET CLR 3.5.30729)"); wobj.executeConnection(); l = wobj.getContentLength(); // global variable return wobj.getContentStream(); //returns 'is' stream } 200ms is quite expected from a network operation, and I am fine with it. BUT when I parse the InputStream in any way (read it into a string / use a Java XML parser / read it into another ByteArrayStream), the process takes over 1000ms!
    For example, this code takes 1000ms IF I pass the stream I got ('is') above from getContentStream() directly to this method: public static Document convertToXML(InputStream is) throws ParserConfigurationException, IOException, SAXException { DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); DocumentBuilder db = dbf.newDocumentBuilder(); Document doc = db.parse(is); doc.getDocumentElement().normalize(); return doc; } This code, too, takes around 920ms IF the initial InputStream 'is' is passed in (don't read into the code itself - it just extracts the data I need by directly counting the characters, which can be done thanks to the rigid API feed format): public static parsedBattlePage convertBattleToXMLWithoutDOM(InputStream is) throws IOException { // Point A BufferedReader br = new BufferedReader(new InputStreamReader(is)); LinkedList ll = new LinkedList(); String str = br.readLine(); while (str != null) { ll.add(str); str = br.readLine(); } if (((String) ll.get(1)).indexOf("error") != -1) { return new parsedBattlePage(null, null, true, -1); } //Point B Iterator it = ll.iterator(); it.next(); it.next(); it.next(); it.next(); String[][] hits_arr = new String[1000][4]; String t_str = (String) it.next(); String tmp = null; int j = 0; for (int i = 0; t_str.indexOf("time") != -1; i++) { hits_arr[i][0] = t_str.substring(12, t_str.length() - 11); tmp = (String) it.next(); hits_arr[i][1] = tmp.substring(14, tmp.length() - 9); tmp = (String) it.next(); hits_arr[i][2] = tmp.substring(15, tmp.length() - 10); tmp = (String) it.next(); hits_arr[i][3] = tmp.substring(18, tmp.length() - 13); it.next(); it.next(); t_str = (String) it.next(); j++; } String[] b_info_arr = new String[9]; int[] space_nums = {13, 10, 13, 11, 11, 12, 5, 10, 13}; for (int i = 0; i < space_nums.length; i++) { tmp = (String) it.next(); b_info_arr[i] = tmp.substring(space_nums[i] + 4, tmp.length() - space_nums[i] - 1); } //Point C return new parsedBattlePage(hits_arr, b_info_arr, false, j); } I have tried replacing the default BufferedReader with BufferedReader br = new BufferedReader(new InputStreamReader(is), 250000); This didn't change much. My second try was to replace the code between A and B with: Iterator it = IOUtils.lineIterator(is, "UTF-8"); Same result, except this time A-B was 0ms and B-C was 1000ms, so every call to it.next() must have been consuming some significant time. (IOUtils is from the apache-commons-io library.) And here is the culprit - the time taken to parse the stream to string, be it by an iterator or a BufferedReader, in ALL cases was about 1000ms, while the rest of the code took 0ms (i.e. irrelevant). This means that parsing the stream to a LinkedList, or iterating over it, for some reason was eating up a lot of my system resources. The question was - why? Is it just the way Java is made... no... that's just stupid, so I did another experiment. In my main method I added, after the getWebPageAsStream(): //Point A ba = new byte[l]; // 'l' comes from wobj.getContentLength above bytesRead = is.read(ba); //'is' is our URLConnection original InputStream offset = bytesRead; while (bytesRead != -1) { bytesRead = is.read(ba, offset - 1, l - offset); offset += bytesRead; } //Point B InputStream is2 = new ByteArrayInputStream(ba); //Now just working with 'is2' - the "copied" stream The InputStream-to-byte[] conversion again took 1000ms - this is the way many people suggested reading an InputStream, and still it is slow.
    And guess what - the 2 parser methods above (convertToXML() and convertBattlePagetoXMLWithoutDOM()), when passed 'is2' instead of 'is', took, in all 4 cases, under 50ms to complete. I read a suggestion that the stream waits for the connection to close before unblocking, so I tried using HttpComponentsClient 4.0 (http://hc.apache.org/httpcomponents-client/index.html) instead, but the initial InputStream took just as long to parse. E.g. this code: public InputStream getWebPageAsStream2(int battle_id, int page) throws Exception { String url = "http://api.erepublik.com/v1/feeds/battle_logs/" + battle_id + "/" + page; HttpClient httpclient = new DefaultHttpClient(); HttpGet httpget = new HttpGet(url); HttpParams p = new BasicHttpParams(); HttpConnectionParams.setSocketBufferSize(p, 250000); HttpConnectionParams.setStaleCheckingEnabled(p, false); HttpConnectionParams.setConnectionTimeout(p, 5000); httpget.setParams(p); HttpResponse response = httpclient.execute(httpget); HttpEntity entity = response.getEntity(); l = (int) entity.getContentLength(); return entity.getContent(); } took even longer to process (50ms more for just the network), and the stream parsing times remained the same. Obviously it can be instantiated so as not to create the HttpClient and properties every time (faster network time), but the stream issue won't be affected by that. So we come to the central problem - why does the initial URLConnection InputStream (or HttpClient InputStream) take so long to process, while any stream of the same size and content created locally is orders of magnitude faster? I mean, the initial response is already somewhere in RAM, and I can't see any good reason why it is processed so slowly compared to when the same stream is just created from a byte[]. Considering I have to parse millions of entries and thousands of pages like that, a total processing time of almost 1.5s/page seems WAY too long. Any ideas? P.S. Please ask if any more code is required - the only thing I do after parsing is make a PreparedStatement and put the entries into JavaDB in packs of 1000+, and the performance is OK, ~200ms/1000 entries; it could probably be optimized with more caching but I didn't look into it much.

    Read the article
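
    A small sketch of the "drain the network stream once, then parse from memory" approach the question converges on. It separates network time from parse time, and it also avoids the off-by-one in the manual read loop above, which appears to write each chunk at offset - 1:

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.InputStream;

    // Read the whole response into memory with one buffered loop, then hand a
    // ByteArrayInputStream to whatever parser comes next.
    public final class StreamDrain {
        public static InputStream drain(InputStream network) throws IOException {
            ByteArrayOutputStream out = new ByteArrayOutputStream(256 * 1024);
            byte[] buf = new byte[8192];
            int n;
            while ((n = network.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
            network.close();
            return new ByteArrayInputStream(out.toByteArray());
        }
    }

    Timing drain() separately from the parser call then shows whether the ~1000ms lives in the socket reads (plausible if the server streams the page slowly) or in the parsing itself.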

  • Convert from Price

    - by Leroy Jenkins
    I'm attempting to convert a Price (from an API; code below). public class Price { public Price(); public Price(double data); public Price(double data, int decimalPadding); } What I would like to do is compare the price from this API to a Double. Simply trying to convert to Double isn't working as I would have hoped. Double bar = 21.75; if (Convert.ToDouble(apiFoo.Value) >= bar) { //code } When I try something like this, I believe it says the value must be lower than infinity. How can I convert this price so they can be compared?

    Read the article

  • Can I inject a SessionBean into a JEE AroundInvoke-Interceptor?

    - by Michael Locher
    I have an EAR with modules: foo-api.jar foo-impl.jar interceptor.jar In foo-api there is: @Local FooService // (interface of a local stateless session bean) In foo-impl there is: @Stateless FooServiceImpl implements FooService //(implementation of the foo service) In interceptor.jar I want: public class BazInterceptor { @EJB private FooService foo; @AroundInvoke public Object intercept( final InvocationContext i) throws Exception { // do something with foo service return i.proceed(); } The question is: will a Java EE 5-compliant application server (e.g. JBoss 5) inject into the interceptor? If not, what is a good strategy for accessing the session bean? To consider: deployment ordering / race conditions

    Read the article
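
    If injection into the interceptor turns out not to work on a given container, a common fallback is a lazy JNDI lookup inside the interceptor, which also sidesteps deployment-ordering worries. A sketch - the JNDI name is illustrative and container-specific, and FooService is the interface from foo-api:

    import javax.interceptor.AroundInvoke;
    import javax.interceptor.InvocationContext;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;

    // Resolve the session bean lazily via JNDI instead of relying on @EJB
    // injection into the interceptor.
    public class BazInterceptor {
        private volatile FooService foo;

        private FooService foo() throws NamingException {
            if (foo == null) {
                // Adjust to whatever name your server binds FooServiceImpl under.
                foo = (FooService) new InitialContext()
                        .lookup("myapp/FooServiceImpl/local");
            }
            return foo;
        }

        @AroundInvoke
        public Object intercept(final InvocationContext ctx) throws Exception {
            foo(); // do something with the foo service before proceeding
            return ctx.proceed();
        }
    }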

  • Amazon like Ecommerce site and Recommendation system

    - by Hellnar
    Hello, I am planning to implement a basic recommendation system that uses Facebook Connect or similar social networking site APIs to connect to a user's profile, analyze it based on tags, and use the results to generate item recommendations on my e-commerce site (which works similarly to Amazon). I believe I need to divide it into these parts: fetch social networking data via the APIs (assuming the user allows this); analyze these data and generate tokens; and, using those information tokens, make item recommendations on my e-commerce site. E.g.: I am a fan of the band "The Strokes" on my Facebook account; the system analyzes this and recommends me "The Strokes Live" CD. For each part (fetching data, doing recommendations based on tags, ...), what algorithm and method would you recommend or is typically used? Thanks

    Read the article
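
    For the "generate recommendations from tags" step, the simplest baseline is plain tag overlap (Jaccard similarity) between the user's profile tags and each item's tags; collaborative filtering can come later once purchase history exists. A sketch with invented data structures:

    import java.util.Comparator;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Map;
    import java.util.Set;
    import java.util.stream.Collectors;

    // Score items by Jaccard similarity between the user's profile tags and
    // each item's tags, then return the best-scoring items first.
    public final class TagRecommender {
        public static List<String> recommend(Set<String> userTags,
                                             Map<String, Set<String>> itemTags,
                                             int limit) {
            return itemTags.entrySet().stream()
                    .sorted(Comparator.comparingDouble(
                            (Map.Entry<String, Set<String>> e) -> -jaccard(userTags, e.getValue())))
                    .limit(limit)
                    .map(Map.Entry::getKey)
                    .collect(Collectors.toList());
        }

        private static double jaccard(Set<String> a, Set<String> b) {
            if (a.isEmpty() && b.isEmpty()) return 0.0;
            Set<String> inter = new HashSet<>(a);
            inter.retainAll(b);
            Set<String> union = new HashSet<>(a);
            union.addAll(b);
            return (double) inter.size() / union.size();
        }

        public static void main(String[] args) {
            Set<String> userTags = Set.of("the strokes", "indie rock", "live albums");
            Map<String, Set<String>> catalog = Map.of(
                    "The Strokes Live CD", Set.of("the strokes", "live albums", "cd"),
                    "Jazz Piano Anthology", Set.of("jazz", "piano", "cd"));
            System.out.println(recommend(userTags, catalog, 1)); // [The Strokes Live CD]
        }
    }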

  • How can one connect to an RFCOMM device other than another phone in Android?

    - by Charles Duffy
    The Android API provides examples of using listenUsingRfcommWithServiceRecord() to set up a socket and createRfcommSocketToServiceRecord() to connect to that socket. I'm trying to connect to an embedded device with a BlueSMiRF Gold chip. My working Python code (using the PyBluez library), which I'd like to port to Android, is as follows: sock = bluetooth.BluetoothSocket(proto=bluetooth.RFCOMM) sock.connect((device_addr, 1)) return sock.makefile() ...so the service to connect to is simply defined as channel 1, without any SDP lookup. As the only documented mechanism I see in the Android API does SDP lookup of a UUID, I'm slightly at a loss.

    Read the article
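
    For serial modules like the BlueSMiRF, the usual Android-side approach (assuming the module advertises the standard Serial Port Profile, which BlueSMiRF-class adapters normally do) is to pass the well-known SPP UUID to createRfcommSocketToServiceRecord instead of a custom service UUID. A sketch:

    import java.io.IOException;
    import java.util.UUID;
    import android.bluetooth.BluetoothAdapter;
    import android.bluetooth.BluetoothDevice;
    import android.bluetooth.BluetoothSocket;

    // Connect to a serial Bluetooth module from Android using the well-known
    // Serial Port Profile (SPP) UUID.
    public final class SppClient {
        // Standard SPP UUID - effectively "the RFCOMM serial channel".
        private static final UUID SPP_UUID =
                UUID.fromString("00001101-0000-1000-8000-00805F9B34FB");

        public static BluetoothSocket connect(String deviceAddr) throws IOException {
            BluetoothAdapter adapter = BluetoothAdapter.getDefaultAdapter();
            BluetoothDevice device = adapter.getRemoteDevice(deviceAddr);
            BluetoothSocket socket = device.createRfcommSocketToServiceRecord(SPP_UUID);
            adapter.cancelDiscovery();  // ongoing discovery slows down or breaks connects
            socket.connect();           // blocking, like sock.connect() in the PyBluez code
            return socket;              // then use getInputStream()/getOutputStream()
        }
    }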

  • What should a PHP generate to give back to a jQuery AJAX request?

    - by Alex Mcp
    Perhaps it's a syntax error, but I never assume that. I have a -dead- simple AJAX test set up: http://www.mcphersonindustries.com/bucket/api.php is a file with simply: <?php echo "test"; ?> And I have Apache as localhost with this jQuery bit running: $(document).ready(function() { function doAjaxPost() { $.ajax({ type: "POST", url: "http://www.mcphersonindustries.com/bucket/api.php", data: "null", success: function(resp){ console.log("Response: '" + resp + "'"); }, error: function(e){ console.log('Error: ' + e); } }); } doAjaxPost(); }); So Firebug spits out Response: '' each time, but nothing's coming through the request. Do I need to declare a header in PHP? Am I making a boneheaded mistake somewhere? Thanks for the insights, as always.

    Read the article
