Search Results

Search found 76034 results on 3042 pages for 'google api php client'.


  • Open Source Web-based CMS for writing and managing API documentation

    - by netcoder
    This is a question that has been asked before in a similar form (i.e. how to manage an open source project's documentation). However, my question is a little different because:
    - We're not developing open source software, but proprietary software.
    - The documentation has to be hand-written, because we do not want to publish the actual software API documentation, only the public API documentation.
    - I do want developers and project managers to write the documentation collaboratively.
    Obviously, wikis are a solution, but they're very generic. I'm looking for a more specialized tool for this job. I've looked around and found a few options such as Adobe RoboHelp and various SaaS solutions, but I'd like to know if any open source software exists for this purpose. Do you know of any open source web-based CMS for writing and managing API and software documentation?

    Read the article

  • Translating error messages from an external API?

    - by Jan Fabry
    If I am localizing a piece of software that uses an external API, how should I handle error messages that originate in this API? I do not control the API, I only consume it. The error responses are not very structured: some contain error codes, some contain verbose details in the text, others almost nothing. Some errors can be fixed by the user (incorrect configuration), some are caused by the external service (server overload), some could be caused by a bug in my software (of course, this would be very unlikely...). I would like to provide a smooth experience to my end-users, so they know what went wrong and what they can do to fix it. What is the best strategy to use here? (This is a generalization of a question from the WordPress Stack Exchange. I thought it would be worth re-asking here, because it is not limited to WordPress plugins.)
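    One common approach (a minimal sketch, not code from the question; the error codes and messages are made up for illustration) is to map the API error codes you do recognise onto your own localized messages and fall back to a generic, logged message for everything else:

      <?php
      // Sketch: map known API error codes to localized, user-actionable messages,
      // and fall back to a generic message for anything unrecognised.
      $catalogue = array(
          'CONFIG_INVALID'    => 'Your configuration looks incorrect. Please check the API settings.',
          'SERVER_OVERLOADED' => 'The external service is overloaded. Please try again in a few minutes.',
      );

      function translate_api_error($code, $rawMessage, array $catalogue) {
          if (isset($catalogue[$code])) {
              return $catalogue[$code];
          }
          // Unknown error: keep the raw text for logs/support, show a generic message to the user.
          error_log("Unhandled API error '$code': $rawMessage");
          return 'Something went wrong while contacting the external service. Please try again later.';
      }

      echo translate_api_error('SERVER_OVERLOADED', 'HTTP 503', $catalogue), "\n";
      echo translate_api_error('E_SOMETHING_ODD', 'unexpected response', $catalogue), "\n";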

    Read the article

  • Google authorship verification issue

    - by Fraser
    I'm trying to get my blog content author-verified so my face gets into the Google search results. I managed to achieve this a few weeks back: when testing my content in the Google authorship testing tool it reported that I had been verified, and I could see my mug in the results. All I had to do was wait a couple of weeks before I started popping up in the search results (I think?).
    However, I seem to have thrown a spanner in the works. I set up Google Apps for my domain and merged my old Google+ profile into my Google Apps account. This seemed to reset my Google+ profile (no biggy, since it was a new profile and only had 1 connection). I re-set up my G+ account and tied it all in to my blog and its content.
    I am now seeing some very strange behaviour. If you take a look at one of my blog posts through the snippet testing tool: http://www.google.com/webmasters/tools/richsnippets?url=http%3A%2F%2Fblog.fraser-hart.co.uk%2Fjquery-fullscreen-background-slideshow%2F&html= you will see that it is not recognising me as an author. However, when you enter my profile URL (https://plus.google.com/108765138229426105004) into the "Authorship verification by email" input, you will see that it does in fact recognise it as verified. Now, if you try to verify the same page again, it reverts back to unverified.
    I thought I might have to just wait it out, but it has been over a week now, and previously (before I merged my profile) it happened instantaneously. Has anyone experienced this bizarre behaviour before? What is happening here? More importantly, is there anything I can do to resolve it? (Apologies for the long and boring question.) Cheers!

    Read the article

  • Designing an API on top with Java RMI and Rest APIs

    - by user1303881
    I'm working on the backend of a Java web application. We have a document repository (Fedora Commons, specifically) where we house XML files. I want to abstract the API of the repository internally so that we aren't tightly coupled to one product. I'd also like the flexibility of connecting to a repository via Java RMI or a REST API. I was hoping to get advice or resources on how to implement something like this. My thought is that I'd have some abstract repository class with methods like getRecord, updateRecord, and deleteRecord. In the constructor I would pass the URI for the repository along with the API method and port. This would allow some flexibility in the future if the REST API became more practical, while still allowing the use of RMI, which could (should?) have better performance. Am I overthinking this, or am I on the right path?

    Read the article

  • Google ranking, page crawl

    - by Nawaf Mubarak
    Please don't mind me asking this newbie question about Google ranking. I know that in order to get ranked a page has to be crawled by Google's bots, and I have an example page that I'd like to use to better understand how the system works with Google.
    I made a page on my website last month and it got indexed pretty quickly. I then found it on page 15 of Google's results for my keyword as a start; the next day it made it to page 13, then after a week it was jumping back and forth between pages 17/18 and up to 20. Now a month has passed and it isn't listed in any position for that keyword. Sometimes I will find it on page 30, but later I won't find it anywhere; it keeps happening this way these days.
    Even when it isn't listed on any page for my keyword, if I do a search for "site:thepageadress" it is listed, which means I'm not penalized and my page is there for Google to see, but it isn't in the search results for my keyword. But when I search "site:thepage_adress", open the "search tools" option and click on "Past day" or "Past week", it isn't listed; it is only listed when I click on "Past month". I think this means that Google indexed the page, looked at it once when I published it, and never looked at it again. Is that a fair statement?
    So two questions come to mind here:
    1. Should Google keep looking at a page even if I haven't changed any info on it? Is this an indication that my page is doing fine, or is it normal that Google sees it once and that's it?
    2. Why does my page keep jumping back and forth in the ranking results for my keyword, and why is it sometimes not listed at all? What does that mean, and how can I fix it?
    Sorry for the long message. I hope to god that somebody can help me with this. Thank you!

    Read the article

  • How to calculate square root in PHP [explained] [on hold]

    - by Enes Imsirovic
    First, the code! Don't forget to embed jQuery!

      <html>
      <head>
      <title>Simple jQuery and PHP Square Root example</title>
      <script src="js/jquery-1.10.1.js" type="text/javascript"></script>
      <script type="text/javascript">
      $(document).ready(function() {
          $('#form').submit(function() {
              var number = $('#number').val();
              $.ajax({
                  type: "post",
                  url: "calculate.php",
                  data: "number=" + number,
                  success: function(msg) {
                      $('#result').hide();
                      $("#result").html("<h3>" + msg + "</h3>").fadeIn("slow");
                  }
              });
              return false;
          });
      });
      </script>
      </head>
      <body>
      <form id="form" action="calculate.php" method="post">
          Enter number: <input id="number" type="text" name="number" />
          <input id="submit" type="submit" value="Calculate Square Root" name="submit"/>
      </form>
      <p id="result"></p>
      </body>
      </html>

    The second file, which the first one posts to: calculate.php

      <?php
      if ($_POST['number'] == null) {
          echo "Please Enter a Number";
      } else {
          if (!is_numeric($_POST['number'])) {
              echo "Please enter only numbers";
          } else {
              echo "Square Root of " . $_POST['number'] . " is " . sqrt($_POST['number']);
          }
      }
      ?>

    This is chiefly for beginners, to see the power of PHP :) xD Load this on your localhost. The PHP and JS files: https://mega.co.nz/#!Et8zWSBb!KX2PFxa2Pzw_l-wi6QU8xi_eKTlHbtQuBsT_DvXrifk In the end it looks like this: http://imgur.com/vNnDRQ3

    Read the article

  • Using Google Analytics to determine how much time a visitor spends in each section of my site

    - by flossfan
    I have a site with various pages, like:
    /about/history
    /about/team
    /contact/email-us
    /contact
    I want to figure out how much time people are spending on the entire /about section, and how much on the /contact section. If I run a query on the Google Analytics API and set the dimension to ga:pagePathLevel1 and the metric to ga:avgTimeOnPage, I get results like this:
    { pagePathLevel1: /about, avgTimeOnPage: 28 }, { pagePathLevel1: /contact, avgTimeOnPage: 10 }
    This looks roughly like what I want, but I'm not sure how to interpret it: is the value of avgTimeOnPage the average time spent by any user on all pages that match that path, or is it the average time spent by any user on any single page that matches that path? I'm looking for the average time spent across all pages matching that path, but the time estimates look shorter than I'd expect.
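    For reference, the query being described, assuming the v3 Core Reporting API with a placeholder profile ID and access token, could be built by hand in PHP roughly like this:

      <?php
      // Rough sketch of the Core Reporting API request described above.
      // 'ga:12345678' and $accessToken are placeholders for your own profile ID and OAuth token.
      $accessToken = 'YOUR_OAUTH_ACCESS_TOKEN';

      $params = array(
          'ids'          => 'ga:12345678',
          'start-date'   => '2014-05-01',
          'end-date'     => '2014-05-31',
          'dimensions'   => 'ga:pagePathLevel1',
          'metrics'      => 'ga:avgTimeOnPage',
          'access_token' => $accessToken,
      );

      $url = 'https://www.googleapis.com/analytics/v3/data/ga?' . http_build_query($params);
      $response = json_decode(file_get_contents($url), true);

      // Each row pairs a ga:pagePathLevel1 value with its ga:avgTimeOnPage figure.
      foreach ($response['rows'] as $row) {
          printf("%s => %s seconds\n", $row[0], $row[1]);
      }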

    Read the article

  • Filtering attachment types in Google Apps (free Google Business)

    - by Ernest
    We have Google Apps in our company for mail delivery. Our business can't pay for the Business version yet; however, we need to control the attachment types that employees download. We recently switched from another hosting provider who recommended we use Google Apps for mail when we moved the domain. Before that we had a firewall which was able to prevent certain file types from being downloaded. I know the Business version has a section for filtering mail (Postini services). Is there a workaround for my problem? Has anyone ever had this problem? Thank you! UPDATE: The main problem is that the Gmail apps use an SSL connection. Can this be changed? How can I get the firewall to filter files, allowing only *.doc, *.xls and *.pdf?

    Read the article

  • Google Chrome doesn't stay logged in to Google sites when using pinned tabs

    - by Nick T
    Despite checking "stay logged in" or the like on Gmail or Docs, Chrome refuses to do so when I close and re-open it with Google sites pinned. If they're not pinned, it works fine. The "Clear cookies and other site and plug-in data when I close my browser" checkbox in the settings is not checked, and I don't have any cookie exceptions. All settings are defaults. Nor is the incognito mode being used. This occurs on all my computers using Chrome. I have deleted my cookies file (%userprofile%\AppData\Local\Google\Chrome\User Data\Default\Cookies) with no effect (other than losing the logins that ordinarly work fine). Of note is that when I relaunch Chrome with Gmail pinned and it asks me to log in, doing so once will fail (does nothing; no errors), then it will work on the second attempt. If I refresh the window before doing so, it will work on the first attempt.

    Read the article

  • Where to place php libraries/extensions?

    - by gdaniel
    I am new to a lot of server configurations and options. I want to add extra PHP libraries/extensions to my server. Where do I add them? (I am on a CentOS 6.5 VPS.) For example, I want to add the phpseclib PHP extension. Their website instructions are minimal:

    Usage
    This library is written using the same conventions that libraries in the PHP Extension and Application Repository (PEAR) used to be written in (current requirements break PHP4 compatibility). In particular, this library needs to be in your include_path:

      <?php
      set_include_path(get_include_path() . PATH_SEPARATOR . 'phpseclib');
      include('Net/SSH2.php');
      ?>

    It tells me how to use it, but it doesn't tell me where to add the actual extension files. Should I add them under /usr/local/lib, /usr/local/lib/php, or /usr/local/lib/php/pear? Or can I add them under public_html? Also, my VPS has several users under /home/.. Is there a way to make the library available to only one user?
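    For what it's worth, phpseclib is a pure-PHP library rather than a compiled extension, so installing it just means putting the folder somewhere readable and adding that directory to include_path. A minimal sketch, assuming it was unpacked to /usr/local/lib/php/phpseclib (a directory under a single user's home works the same way, it just needs to be readable by that account's scripts; the host and credentials below are placeholders):

      <?php
      // phpseclib only needs to live somewhere on the include_path.
      // Adjust the path to wherever you unpacked the library.
      set_include_path(get_include_path() . PATH_SEPARATOR . '/usr/local/lib/php/phpseclib');

      include('Net/SSH2.php');

      $ssh = new Net_SSH2('example.com');          // placeholder host
      if (!$ssh->login('username', 'password')) {  // placeholder credentials
          exit('Login failed');
      }
      echo $ssh->exec('uptime');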

    Read the article

  • Client Side Prediction for a Look Vector

    - by Mike Sawayda
    So I am making a first-person networked shooter. I am working on client-side prediction, where I predict player positions and look vectors client-side based on input messages received from the server. Right now I am only worried about the look vectors, though. I receive the correct look vector from the server about 20 times per second and I check it against the look vector that I have client-side. I want to interpolate the client's look vector towards the correct server-side one over a period of time, so that no matter how far you are away from the server's look vector, you interpolate to it over the same amount of time. For example, if you were 10 degrees off it would take the same amount of time to line up with the server copy as if you were 2 degrees off. My code looks something like this, but the problem is that the amount by which you change the client's copy gets infinitesimally small, so you will actually never reach the server's copy. This is because I am always calculating the difference and only moving by a percentage of it every frame. Does anyone have any suggestions on how to interpolate towards the server's copy correctly?

      if (rotationDiffY > ClientSideAttributes::minRotation)
      {
          if (serverRotY > clientRotY) {
              playerObjects[i]->collisionObject->rotation.y += (rotationDiffY * deltaTime);
          } else {
              playerObjects[i]->collisionObject->rotation.y -= (rotationDiffY * deltaTime);
          }
      }

    Read the article

  • Google Currency Convertor JSON API

    - by Gopinath
    There are many live currency conversion services available on the web, and the popular ones among them are Google, Yahoo, MSN and XE. Among these four, Google is the developers' darling: it provides a simple JSON API that can be integrated into your applications.
    http://www.google.com/ig/calculator?hl=en&q=1USD=?INR
    Using the API is very simple and it takes two parameters as input. The first parameter "hl" is the language code in which you want the output. The second parameter "q" is the conversion query in the format <number><from currency code>=?<to currency code>. The URL given above requests the conversion of 1 USD into INR. The JSON output for the above query would be similar to:
    {lhs: "1 U.S. dollar",rhs: "54.4602984 Indian rupees",error: "",icc: true}
    Example 1: 100 USD in INR
    http://www.google.com/ig/calculator?hl=en&q=100USD=?INR
    Example 2: 1 GBP in INR
    http://www.google.com/ig/calculator?hl=en&q=1GBP=?INR
    Example 3: 1 USD in INR, with the output in French
    http://www.google.com/ig/calculator?hl=fr&q=1USD=?INR
    This is an undocumented service, so expect changes at any time. But as long as it works, you have a programmatic way to convert currencies.
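    A quick PHP sketch of how the endpoint could be consumed. Note that the response shown above uses unquoted keys, so it is not strict JSON; the regex below quotes the keys before json_decode. Treat this as illustrative rather than production code:

      <?php
      // Illustrative only: fetch a conversion and parse the loosely formatted response.
      $from   = 'USD';
      $to     = 'INR';
      $amount = 100;

      $url  = sprintf('http://www.google.com/ig/calculator?hl=en&q=%d%s=?%s', $amount, $from, $to);
      $body = file_get_contents($url);

      // The service returns keys without quotes (e.g. {lhs: "...", rhs: "..."}),
      // so quote them to make the string valid JSON before decoding.
      $json = preg_replace('/(\w+):/', '"$1":', $body);
      $data = json_decode($json, true);

      if ($data && $data['error'] === '') {
          // Prints something like "100 U.S. dollars = ... Indian rupees"
          echo $data['lhs'] . ' = ' . $data['rhs'] . "\n";
      } else {
          echo "Conversion failed\n";
      }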

    Read the article

  • Google Analytics Not tracking data correctly IP-address issue?

    - by PaperThick
    I have developed a small site for a client, and the site has been placed inside an <iframe> on the client's site. The GA script I'm using looks like this:

      <script type="text/javascript">
      var _gaq = _gaq || [];
      _gaq.push(
          ['_setAccount', 'UA-XXXXXXXX-2'],          // My company's GA account
          ['_trackPageview'],
          ['b._setAccount', 'UA-XXXXXXXX-1'],        // Test GA account
          ['b._trackPageview'],
          ['th._setAccount', 'UA-XXXXXXX-3'],
          ['th._setDomainName', '.clientdomain.se'], // Client GA account
          ['th._trackPageview']
      );
      (function () {
          var ga = document.createElement('script');
          ga.type = 'text/javascript';
          ga.async = true;
          ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
          var s = document.getElementsByTagName('script')[0];
          s.parentNode.insertBefore(ga, s);
      })();
      </script>
      </head>

    As you can see, I report the GA pageviews to the client as well. The GA script is tracking visitors and pageviews at both ends, but the problem is that on my client's side the visitor count is more than double what it is on my end (20 000 vs 5 000). At first I thought it was being duplicated at some point, but when I checked my Crazy Egg account I saw that it had tracked over 10 000 visits and then stopped tracking because that was the limit on my account. The page my site is on sits on an IP address (http://XXX.XXX.XX.X/campaign/) and not on a "valid URL". Could that be a reason why some of the visitors aren't being tracked? Thanks in advance

    Read the article

  • Somehow Google considers a properly 301'd URL as 200 and is still indexing the new content in old page?

    - by user2178914
    We redirected all the old URLs to new ones properly using htaccess. The problem is that Google somehow is still finding content at the old page (which it shouldn't) and stores it in the cache rather than under the new URL. For example:
    Old page: http://www.natures-energies.com/iching.htm
    New page: http://www.natures-energies.com/index.php?option=com_content&view=article&id=760
    - If you type the old URL into the browser, it redirects.
    - If you fetch the old URL as Googlebot in the webmaster tools, the header says 301/permanently redirected.
    - If I try to crawl as any other bot, it still says 301 redirected.
    - Even if you click the old link in Google, it redirects to the new URL.
    Only in its cache does Google show the old URL, and moreover it shows the new content in it! I am stumped as to how Google manages to grab the new content and put it under the old URL instead of the new one! One more interesting thing is that if I check the cache for the new page, it shows a cache of the new content with the old URL! Any help would be appreciated. I am at my wits' end. I think I have tried almost everything. Is there anything that I'm missing? You can use this search to find the old URLs; maybe you'll spot some patterns that I missed.
    site:www.natures-energies.com inurl:htm -inurl:https|index

    Read the article

  • Uploading multiple images through Tumblr API

    - by Joseph Carrington
    I have about 300 images I want to upload to my new Tumblr account, because my old WordPress site got hacked and I no longer wish to use WordPress. I uploaded one image a day for 300 days, and I'd like to be able to take these images and upload them to my Tumblr site using the API. The images are currently local, stored in /images/. They all have the date they were uploaded as the first ten characters of the filename (01-01-2009-filename.png), and I want to send this date parameter along as well. I want to be able to see the progress of the script by outputting the responses from the API to my error_log. Here is what I have so far, based on the Tumblr API page.

      // Authorization info
      $tumblr_email    = '[email protected]';
      $tumblr_password = 'password';

      // Tumblr script parameters
      $source_directory = "images/";

      // For each file, assign the file to a pointer

    Here's the first stumbling block: how do I get all of the images in the directory and loop through them? Once I have a for or while loop set up, I assume this is the next step:

      $post_data = fopen(dirname(__FILE__) . '/' . $source_directory . $current_image, 'r');
      $post_date = substr($current_image, 0, 10);

      // Data for new record
      $post_type = 'photo';

      // Prepare POST request
      $request_data = http_build_query(
          array(
              'email'     => $tumblr_email,
              'password'  => $tumblr_password,
              'type'      => $post_type,
              'data'      => $post_data,
              'date'      => $post_date,
              'generator' => 'Multi-file uploader'
          )
      );

      // Send the POST request (with cURL)
      $c = curl_init('http://www.tumblr.com/api/write');
      curl_setopt($c, CURLOPT_POST, true);
      curl_setopt($c, CURLOPT_POSTFIELDS, $request_data);
      curl_setopt($c, CURLOPT_RETURNTRANSFER, true);
      $result = curl_exec($c);
      $status = curl_getinfo($c, CURLINFO_HTTP_CODE);
      curl_close($c);

      // Output response to error_log
      error_log($result);

    So, I'm stuck on how to use PHP to read a file directory, loop through each of the files, and do things to the name / with the file itself. I also need to know how to set the data parameter, as in choosing multipart/form-data. I also don't know anything about cURL.
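    One way to fill in the missing directory loop (a sketch only; it reuses the variable names from the snippet above and assumes the 10-character date prefix is always present) is to glob() the folder and iterate over the matches:

      <?php
      $source_directory = 'images/';

      // Grab every image in the directory (adjust the extension list as needed).
      $files = glob($source_directory . '*.{png,jpg,jpeg,gif}', GLOB_BRACE);

      foreach ($files as $path) {
          $current_image = basename($path);        // e.g. "01-01-2009-filename.png"
          $post_date     = substr($current_image, 0, 10);

          // Read the file contents; how they need to be encoded for the 'data'
          // parameter depends on the API (check the /api/write docs).
          $post_data = file_get_contents($path);

          // ... build $request_data and send the cURL request here, as in the code above ...
          echo "Would upload $current_image with date $post_date\n";
      }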

    Read the article

  • Mantis Bug tracker API integration?

    - by Industrial
    Hi everybody, I have just installed the Mantis bug tracker to use together with the Eclipse IDE and have started to discover its advantages. Really great. Since Eclipse communicates with Mantis through a PHP SOAP API, I wonder if there is some documentation available on how I can make calls to the API myself, to add new bugs and get the statuses of existing ones. Thanks a lot!
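    For reference, here is a rough sketch of calling the MantisConnect SOAP endpoint directly with PHP's built-in SoapClient. The URL, credentials and issue fields are placeholders, and the method names and structures should be checked against the WSDL shipped with your Mantis installation (api/soap/mantisconnect.php?wsdl):

      <?php
      // Sketch only: talk to the MantisConnect SOAP API from plain PHP.
      // Check the generated WSDL for the exact methods and field names your version exposes.
      $client = new SoapClient('http://example.com/mantis/api/soap/mantisconnect.php?wsdl');

      $username = 'administrator';  // placeholder credentials
      $password = 'secret';

      // Fetch an existing issue to check its status.
      $issue = $client->mc_issue_get($username, $password, 123);
      echo 'Issue 123 status: ' . $issue->status->name . "\n";

      // Create a new issue (field names per the IssueData structure in the WSDL).
      $newIssueId = $client->mc_issue_add($username, $password, array(
          'project'     => array('id' => 1),
          'category'    => 'General',
          'summary'     => 'Created through the SOAP API',
          'description' => 'Longer description goes here.',
      ));
      echo "Created issue #$newIssueId\n";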

    Read the article

  • Choose Graph API or old REST API for Facebook application

    - by Andree
    Hi there! I should have asked this in the Facebook developer forum instead, but somehow I can't register for the forum and the Facebook Connect feature is not working at the time I'm writing this. Anyway, I am still confused about whether to use the Graph API or the old REST API for my Facebook app. Generally, this is what I want to achieve in my app:
    - Get the profile picture and name of the user.
    - Get the profile picture and name of the user's friends who are also using my app.
    - Post into the user's stream.
    - Allow users to invite their friends to use the application.
    Can someone provide some insight into which one is better for my application?
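    For the first item on that list, a Graph API request is a single HTTP call once you have a user access token. A minimal PHP sketch (the token is a placeholder, and the exact shape of the picture field should be checked against the current Graph API docs):

      <?php
      // Sketch: fetch the current user's name and profile picture via the Graph API.
      $accessToken = 'USER_ACCESS_TOKEN';  // placeholder obtained from the login flow

      $url = 'https://graph.facebook.com/me?' . http_build_query(array(
          'fields'       => 'id,name,picture',
          'access_token' => $accessToken,
      ));
      $me = json_decode(file_get_contents($url), true);

      echo $me['name'] . "\n";   // the user's name
      print_r($me['picture']);   // the profile picture value returned by the API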

    Read the article

  • google mobile tracking api

    - by Bharanikumar
    Hi, we run a call-taxi business and I have around 100 drivers that I want to track. I am not ready to go for GPRS or any other costly solution, so I want to track these guys using their mobiles. That is, I have all of these drivers' mobile numbers; is there a way to find a driver's present location using any Google API? Please tell me which Google API can be used for tracking a mobile. Thanks, Bharanikumar

    Read the article

  • Closing InfoWindow with Google Maps API V3

    - by Oscar Godson
    I've seen the other posts, but they don't have the markers being looped through dynamically like mine does. How do I create an event that will close the infowindow when another marker is clicked, using the following code?

      $(function(){
          var latlng = new google.maps.LatLng(45.522015, -122.683811);
          var settings = {
              zoom: 10,
              center: latlng,
              disableDefaultUI: false,
              mapTypeId: google.maps.MapTypeId.SATELLITE
          };
          var map = new google.maps.Map(document.getElementById("map_canvas"), settings);

          $.getJSON('api', function(json){
              for (var property in json) {
                  if (json.hasOwnProperty(property)) {
                      var json_data = json[property];
                      var the_marker = new google.maps.Marker({
                          title: json_data.item.headline,
                          map: map,
                          clickable: true,
                          position: new google.maps.LatLng(
                              parseFloat(json_data.item.geoarray[0].latitude),
                              parseFloat(json_data.item.geoarray[0].longitude)
                          )
                      });

                      function buildHandler(map, marker, content) {
                          return function() {
                              var infowindow = new google.maps.InfoWindow({
                                  content: '<div class="marker"><h1>' + content.headline + '</h1><p>' + content.full_content + '</p></div>'
                              });
                              infowindow.open(map, marker);
                          };
                      }

                      new google.maps.event.addListener(the_marker, 'click', buildHandler(map, the_marker, {
                          'headline': json_data.item.headline,
                          'full_content': json_data.item.full_content
                      }));
                  }
              }
          });
      });

    Read the article

  • Google Maps InfoBubble pixelOffset

    - by Sam
    I am trying to implement a custom InfoBubble that has the box opening to the side of a marker. This has turned out to be harder than expected. Using the normal InfoWindow you can use pixelOffset (see the documentation). With InfoBubble this does not seem to be the case. Is there any way of using pixelOffset in an InfoBubble, or something that will do the same thing? I have found this very difficult to search for, as a Google search such as this one returns no relevant results. Below are all the resources I have been using: an example of InfoBubble here, and my JavaScript to set up the map and InfoBubble here. And now my JavaScript, just in case the jsfiddle link is broken:

      <script type="text/javascript">
      $(document).ready(function () {
          init();
      });

      function init() {
          // Setup the map
          var googleMapOptions = {
              center: new google.maps.LatLng(53.5167, -1.1333),
              zoom: 13,
              mapTypeId: google.maps.MapTypeId.ROADMAP
          };

          // Start the map
          var map = new google.maps.Map(document.getElementById("map_canvas"), googleMapOptions);

          var marker = new google.maps.Marker({
              position: new google.maps.LatLng(53.5267, -1.1333),
              title: "Just a test"
          });
          marker.setMap(map);

          infoBubble = new InfoBubble({
              map: map,
              content: '<div class="phoneytext">Some label</div>',
              //position: new google.maps.LatLng(-35, 151),
              shadowStyle: 1,
              padding: '10px',
              //backgroundColor: 'rgb(57,57,57)',
              borderRadius: 5,
              minWidth: 200,
              arrowSize: 10,
              borderWidth: 1,
              borderColor: '#2c2c2c',
              disableAutoPan: true,
              hideCloseButton: false,
              arrowPosition: 7,
              backgroundClassName: 'phoney',
              pixelOffset: new google.maps.Size(130, 120),
              arrowStyle: 2
          });

          infoBubble.open(map, marker);
      }
      </script>

    Read the article

  • Google local search API - callback never called

    - by Laurent Luce
    Hello, I am trying to retrieve some local restaurants using the Google LocalSearch API. I am initializing the search objects in the OnLoad function and I am calling the searchControl's execute function when the user clicks a search button. The problem is that my function attached to setSearchCompleteCallback never gets called. Please let me know what I am doing wrong here.

      <script src="http://www.google.com/jsapi"></script>
      <script type="text/javascript">
      google.load('maps', '2');
      google.load('search', '1');

      var searcher, searchControl;

      function OnLoad() {
          searchControl = new google.search.SearchControl();
          searcher = new google.search.LocalSearch(); // create the object

          // Add the searcher to the SearchControl
          searchControl.addSearcher(searcher);

          searchControl.setSearchCompleteCallback(searcher, function() {
              var results = searcher.results; // Grab the results array
              // We loop through to get the points
              for (var i = 0; i < results.length; i++) {
                  var result = results[i];
              }
          });

          $('#user_search_address').live('click', function() {
              g_searchControl.execute('pizza san francisco');
          });
      }

      google.setOnLoadCallback(OnLoad);
      </script>

    Read the article

  • Facebook API, export group wall entries

    - by Guy David
    After trying to find a reference to an API or tutorial for such a thing, I have come here. I would like to scan a specific group wall, pulling all posts from it, with PHP or C#. In the end, I would like to have a nested array containing each post, with the following details:
    - An array of the related comments
    - Likes
    - Views
    Obviously, I'm not asking for any code, only a reference to the related API I will need to use. EDIT: If this is not possible, should I consider cURL as an option?
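    For context, the Graph API exposes a group's wall as its feed connection. A minimal PHP sketch (the group ID, access token and required permissions are placeholders to be checked against the current Facebook documentation):

      <?php
      // Sketch: read a group's wall through the Graph API.
      // GROUP_ID and the token are placeholders; the token needs permission to read the group.
      $groupId     = 'GROUP_ID';
      $accessToken = 'ACCESS_TOKEN';

      $url  = "https://graph.facebook.com/$groupId/feed?access_token=$accessToken";
      $feed = json_decode(file_get_contents($url), true);

      foreach ($feed['data'] as $post) {
          $text = isset($post['message']) ? $post['message'] : '(no text)';
          echo $post['from']['name'] . ': ' . $text . "\n";
          // Comments and likes, when present, appear as nested 'comments' and 'likes'
          // members, or can be fetched from /POST_ID/comments and /POST_ID/likes.
      }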

    Read the article

  • Versioning APIs

    - by Sharon
    Suppose that you have a large project supported by an API base. The project also ships a public API that end(ish) users can use. Sometimes you need to make changes to the API base that supports your project. For example, you need to add a feature that needs an API change, a new method, or requires altering one of the objects, or the format of one of those objects, passed to or from the API. Assuming that you are also using these objects in your public API, the public objects will also change any time you do this, which is undesirable, as your clients may rely on the API objects remaining identical for their parsing code to work. (cough C++ WSDL clients...)
    So one potential solution is to version the API. But when we say "version" the API, it sounds like this must also mean versioning the API objects as well as providing duplicate method calls for each changed method signature. So I would then have a plain old CLR object for each version of my API, which again seems undesirable. And even if I do this, I surely won't be building each object from scratch, as that would end up with vast amounts of duplicated code. Rather, the API is likely to extend the private objects we are using for our base API, but then we run into the same problem, because added properties would also be available in the public API when they are not supposed to be.
    So what is some sanity that is usually applied to this situation? I know many public services such as Git for Windows maintain a versioned API, but I'm having trouble imagining an architecture that supports this without vast amounts of duplicate code covering the various versioned methods and input/output objects. I'm aware that processes such as semantic versioning attempt to put some sanity on when public API breaks should occur. The problem is more that it seems like many or most changes require breaking the public API if the objects aren't more separated, but I don't see a good way to do that without duplicating code.
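    One common pattern, sketched below in PHP purely for illustration (the same shape applies to CLR objects), is to keep a single internal model and map it explicitly into per-version public DTOs, so internal changes only touch the mappers rather than every published contract. All names here are made up:

      <?php
      // Illustrative sketch: one internal model, explicit per-version public shapes.
      // The v1 contract stays frozen even when the internal object grows new fields.
      class InternalUser {          // rich internal representation, free to change
          public $id;
          public $fullName;
          public $email;
          public $lastLoginAt;      // added later for a new internal feature
      }

      function toPublicUserV1(InternalUser $u) {
          return array(             // frozen v1 contract
              'id'   => $u->id,
              'name' => $u->fullName,
          );
      }

      function toPublicUserV2(InternalUser $u) {
          return toPublicUserV1($u) + array(   // v2 extends v1 rather than duplicating it
              'email'      => $u->email,
              'last_login' => $u->lastLoginAt,
          );
      }

      // /api/v1/users/42 and /api/v2/users/42 would each call the matching mapper.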

    Read the article
