Search Results

Search found 42923 results on 1717 pages for 'google search api'.


  • When to use HTTP status code 404 in an API

    - by Sybiam
    I am working on a project, and after arguing with people at work for more than an hour, I decided to ask what people on Stack Exchange think. We're writing an API for a system, and there is a query that should return either a tree of Organizations or a tree of Goals. The Organization tree is the one the user belongs to; in other words, it should always exist. Within the organization, a tree of Goals should also always be present (that's where the argument started).

    In the case where the tree doesn't exist, my co-worker decided it would be right to respond with status code 200, and then asked me to fix my code because the application was falling apart when there was no tree. I'll try to spare the flames and fury: I suggested raising a 404 error when there is no tree. It would at least let me know that something is wrong. When using 200, I have to add a special check to my success callback to handle errors; I'm expecting to receive an object, but I may actually receive an empty response because nothing was found. It seems totally fair to mark the response as a 404. And then war started, and I was told that I didn't understand the HTTP status code scheme. So I'm here asking: what's wrong with a 404 in this case? I even got the argument "It found nothing, so it's right to return 200." I believe that's wrong, since the tree should always be present: if we expect something and find nothing, it should be a 404.

    More info: I forgot to add the URLs that are fetched. Organizations: /OrgTree/Get. Goals: /GoalTree/GetByDate?versionDate=... and /GoalTree/GetById?versionId=... My mistake: both parameters are required. If any versionDate that can be parsed as a date is provided, it returns the closest revision; if you enter a date in the past, it returns the first revision. If you query by id with an id that doesn't exist, I suspect it returns an empty response with a 200.

    Extra: I also believe the best answer to the problem is to create default objects when organizations are created; having no tree shouldn't be a valid case and should be treated as undefined behavior. There is no way an account can be used without both trees, so they should always be present. I also got linked to this (or something similar I can't find): http://viswaug.files.wordpress.com/2008/11/http-headers-status1.png
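
    For illustration, here is a minimal sketch of the disputed behavior in an Express-style Node.js handler; the route matches the URLs above, but the loadGoalTree helper and the response shapes are hypothetical, not from the question:

        const express = require('express');
        const app = express();

        // Hypothetical lookup; resolves to the goal tree, or null when none exists.
        async function loadGoalTree(versionId) { /* ... */ return null; }

        app.get('/GoalTree/GetById', async (req, res) => {
          const tree = await loadGoalTree(req.query.versionId);
          if (!tree) {
            // The asker's position: the tree is expected to always exist,
            // so its absence is an error the client can detect directly.
            return res.status(404).json({ error: 'Goal tree not found' });
          }
          // The co-worker's position would instead return 200 with an empty body,
          // forcing every success callback to re-check for a missing tree.
          res.json(tree);
        });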

    Read the article

  • Should I use my own public API on my site (via JS)?

    - by newboyhun
    First of all, this question is quite different from other 'public API' questions like Should a website use its own public API?; second, sorry for my English. You can find the question summarized at the bottom.

    What I want to achieve is a big website with a public API, so that people who like programming (like me) and like my website can replicate my website's data with a much better approach (of course, with some restrictions). Almost everything would be available through the public API. Because of this, I was thinking about making the whole website AJAX-driven. Some parts of the API would be limited to my own website (domain), like login and registration. On the client side there would be only an INTERFACE, which would use the public and private API to make itself work. The website would be ONLY CLIENT SIDE; I mean, it would only use AJAX to talk to the API.

    How do I imagine this? The website would be like a mobile application: the application sends a request to a web server, which returns JSON; the application parses it and uses it to advance in the application (e.g., login).

    My thoughts:

    Pros:
    - The whole website is built with JavaScript, which means I don't need to transfer HTML to the client, saving bandwidth (I hope).
    - Anyone can use my website's data to make their own cool things. (Is this a con or a pro?)
    - The public API is always in use, so I can see if there are any errors.

    Cons:
    - Without JavaScript the website is unusable.
    - Bad guys can easily load the server by requesting too much data (like 10,000 requests per second), but this can be countered by limiting requests with some PHP code and logging.
    - Probably much more work.

    So the question, in a few words: Should I build my website around my own API? Is it good to work only on the client side? Is this good for a big website? (e.g., Facebook; yeah, Facebook is a different story, but could it run with an 'architecture' like this?)
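
    As a rough sketch of what such a client-only interface might look like; the /api/login endpoint, the response shape, and the renderDashboard helper are hypothetical assumptions, not from the question:

        // Minimal client-side login flow against a hypothetical JSON API.
        async function login(username, password) {
          const res = await fetch('/api/login', {
            method: 'POST',
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify({ username, password }),
          });
          if (!res.ok) throw new Error('Login failed: ' + res.status);
          const user = await res.json(); // e.g. { id: 1, name: 'newboyhun' }
          renderDashboard(user);         // hypothetical UI update
        }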

    Read the article

  • Using Google Maps API to get travel time data

    - by nibbo
    Hi! All the examples I've come across using the Google Maps API seem to show a map of some kind. I would like to incorporate into a site the estimated travel time by car that you get when you ask for a road description from A to B, and only that data. Is it possible without loading a map for the end visitor? Thanks
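
    A minimal sketch, assuming the Maps JavaScript API v3 is loaded: the DirectionsService can be queried without ever attaching a map or a renderer, and the travel time can be read straight from the result (the origin and destination here are placeholders):

        // Fetch the driving time from A to B without rendering a map.
        var service = new google.maps.DirectionsService();
        service.route(
          {
            origin: 'Stockholm',        // placeholder
            destination: 'Gothenburg',  // placeholder
            travelMode: google.maps.TravelMode.DRIVING,
          },
          function (result, status) {
            if (status === google.maps.DirectionsStatus.OK) {
              var leg = result.routes[0].legs[0];
              console.log(leg.duration.text); // e.g. "4 hours 50 mins"
            }
          }
        );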

    Read the article

  • Is there a CodeIgniter API reference?

    - by ajsie
    Whenever you use a framework, it's great to have an API reference so you can look up classes' methods and properties, which class they extend from, and so on. Is there an API reference for CodeIgniter similar to Yii's excellent one? http://www.yiiframework.com/doc/api/ How else could you learn the details of the classes if there isn't a good API reference? Java's API reference is also great: http://java.sun.com/j2se/1.4.2/docs/api/ Why can't I find CodeIgniter's? Thanks.

    Read the article

  • How to index and search subversion repository

    - by Theo Briscoe
    I have access to a very large code base stored in a Subversion repository, and I would like to be able to perform Google-style searches over it. I have done this before, when I had access to the server, by creating a network share and using Google Desktop. I currently do not have access to this Subversion server box. Any ideas? Some more info:
    - The code base is company-wide and large.
    - I don't want to download the entire code base for the whole business onto my laptop.
    - My goal is to understand what code is available inside the company.
    - The code changes often.
    - I'm wondering if there are any tools that can search remote SVN repositories.

    Read the article

  • How to decode Google Spreadsheet's JSON response as a PHP array

    - by Mohammad
    My Google Docs Spreadsheet call returns its response in JSON format (I only need everything after "rows"); please look at the formatted response here :) I use PHP's json_decode function to parse the data and use it (yes, I am awful at PHP). This code returns NULL, and according to the documentation, NULL is returned "if the json cannot be decoded":

        $json = file_get_contents($jsonurl);
        $json_output = json_decode($json);
        var_dump($json_output); // Returns NULL

    Basically, what I want to accomplish is to make a simple array from the first-row values of the JSON response, like this:

        $array = array('john', 'John Handcock', '[email protected]', '2929292', 'blanc');

    You guys are geniuses; I would appreciate your insight and help on this very much!

    Answer: as "sberry2A" mentions below, the response is not valid JSON. Google offers the Zend JSON library for this purpose, though I decided to parse the tsv (Excel) export instead :)

    Read the article

  • KML Layers Cursor CSS - Google Maps API v3

    - by Douglas
    I've run into a small problem with the semi-new KML overlay functionality in Google Maps API v3: while I am able to use "suppressInfoWindows: true", the cursor still appears as though the overlay(s) are clickable. Is there a way at this time to change the CSS on the overlay(s) so that the cursor is the default one, making them purely visual so they don't confuse the user? The current demo version can be viewed here: http://dougglover.com/samples/finalProduct/ Any assistance is greatly appreciated.
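
    One avenue worth checking, as a sketch: KmlLayerOptions in v3 also has a clickable flag (assuming your API version supports it; the KML URL below is a placeholder), which makes the layer purely visual so the pointer cursor no longer appears:

        // Render a KML overlay as a purely visual layer: no info windows,
        // no pointer cursor, assuming KmlLayerOptions supports `clickable`.
        var layer = new google.maps.KmlLayer({
          url: 'http://example.com/overlay.kml', // placeholder URL
          suppressInfoWindows: true,
          clickable: false, // removes the "hand" cursor over the overlay
          map: map,         // an existing google.maps.Map instance
        });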

    Read the article

  • Google Analytics API and Internal Search question

    - by Am
    I'm trying to use the Google Analytics API to query internal searches that happen on my site. I'd like to be able to query the keywords and the number of times each keyword was used in internal search, based on the URL of a page on the site. The idea is to find out which keywords lead the user to a particular page. Does anyone know which dimensions and metrics I must use to query that information?
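
    As a sketch of one way to ask for this (assuming the Core Reporting API v3; the profile ID, dates, page path, and access token are placeholders): internal site search has its own dimensions, such as ga:searchKeyword and ga:searchDestinationPage, and metrics such as ga:searchUniques:

        // Query internal-search keywords per destination page.
        const params = new URLSearchParams({
          ids: 'ga:12345678',                              // placeholder profile ID
          'start-date': '2014-01-01',                      // placeholder dates
          'end-date': '2014-01-31',
          metrics: 'ga:searchUniques',
          dimensions: 'ga:searchKeyword,ga:searchDestinationPage',
          filters: 'ga:searchDestinationPage==/some/page', // placeholder page
        });
        fetch('https://www.googleapis.com/analytics/v3/data/ga?' + params, {
          headers: { Authorization: 'Bearer ACCESS_TOKEN' }, // placeholder token
        })
          .then(function (r) { return r.json(); })
          .then(function (data) { console.log(data.rows); });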

    Read the article

  • Cannot see my WordPress website on Google search

    - by ion
    Hi guys, I recently uploaded a site made with WordPress. The site URL is oakabeachvolley.gr. I have set WordPress's privacy settings so the site is visible to search engines. However, after almost 45 days the site is still invisible on Google, even when I search using the URL itself and very specific keywords. I have made quite a few sites with WordPress and have never seen this behavior before: sites eventually become visible to the Google engine, sometimes even on the first day. In this case, however, the site doesn't show up anywhere in the first 20 pages of results. Any help would be greatly appreciated.

    Read the article

  • Google Maps rendering locally but not in live environment

    - by marcusstarnes
    I have a page that renders a simple Google map for a specified location. This map renders without any problems when I run it locally on localhost; however, when I deploy the code to our live web servers (using our live Google API key for the appropriate domain) it fails to render, and after adding a series of alerts to the JavaScript on the page, it appears that the initialize method (which should be called on body onLoad) is never called. When I view the HTML source rendered on the live server, it appears exactly as per the local version of the site (including the call to initialize() within the body onLoad event), albeit with the different Maps API key. I have output the host (alert(window.location.host);) to ensure that the key I generated via the Google Maps API site corresponds exactly to the live server, which it does. Does anyone have any ideas why it would work locally but not when deployed to the live servers? The live site is hosted on two load-balanced web servers. This is the JavaScript that is rendered:

        <script src="http://maps.google.com/maps?file=api&amp;v=2&amp;sensor=false&amp;key=ABQIAAAA-BU8POZj19wRlTaKIXVM9xTz76xxk4yAELG9u79oXrhnLTB5NRRvAZ-bkKn1x8J68nfRTVOIWNPJEA" type="text/javascript"></script>
        <script type="text/javascript">
          var map;
          var geocoder;
          alert(window.location.host);

          function initialize() {
            if (GBrowserIsCompatible()) {
              map = new GMap2(document.getElementById("businessMap"));
              map.setUIToDefault();
              geocoder = new GClientGeocoder();
              showAddress('St Margarets Street SW1P 3 London');
            }
          }

          function showAddress(address) {
            geocoder.getLatLng(address, function(point) {
              if (!point) {
                // Address could not be located.
                jQuery('#googleMap').hide();
              } else {
                map.setCenter(point, 13);
                var marker = new GMarker(point);
                map.addOverlay(marker);
                var html = 'Address info for the marker';
                marker.openInfoWindow(html);
                GEvent.addListener(marker, "click", function() {
                  marker.openInfoWindowHtml(html);
                });
              }
            });
          }
        </script>

    Any help would be much appreciated. Thanks.

    Read the article

  • Google Streetview Images API

    - by jsight
    Is there an API available for grabbing Google Street View images at a particular location (and direction)? I see that it's possible to get and position the Flash control, but I'd prefer something that just gives me a JPEG (or some other bitmap format).
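
    For what it's worth, a sketch assuming the Street View Static API, which serves plain JPEGs over HTTP (the location, heading, and API key below are placeholders):

        // Build a Street View Static API request URL that returns a plain JPEG.
        const params = new URLSearchParams({
          size: '640x480',
          location: '46.414382,10.013988', // placeholder lat,lng
          heading: '151.78',               // placeholder compass direction, degrees
          pitch: '-0.76',
          key: 'YOUR_API_KEY',             // placeholder key
        });
        const url = 'https://maps.googleapis.com/maps/api/streetview?' + params;
        document.querySelector('img#streetview').src = url; // hypothetical <img> element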

    Read the article

  • Problem with Google Calendar API "getTimes()" method

    - by Raffo
    I'm trying to get several pieces of information from the Google Calendar API in Java. While trying to access information about the time of an event, I ran into a problem. I do the following:

        CalendarService myService = new CalendarService("project_name");
        myService.setUserCredentials("user", "pwd");
        URL feedUrl = new URL("http://www.google.com/calendar/feeds/default/private/full");
        CalendarQuery q = new CalendarQuery(feedUrl);
        CalendarEventFeed resultFeed = myService.query(q, CalendarEventFeed.class);
        for (int i = 0; i < resultFeed.getEntries().size(); i++) {
            CalendarEventEntry g = resultFeed.getEntries().get(i);
            List<When> t = g.getTimes();
            // other operations
        }

    The list t obtained with getTimes() on a CalendarEventEntry is always empty, and I can't figure out why. This seems strange to me, since for a calendar event there should always be a description of when it happens... What am I missing?

    Read the article

  • How could Google Latitude find my exact PC location with no GPS or public wifi?

    - by Mike
    I found a similar question here, but I still don't get it. You see, I live in a small town, and every time I check my IP location via online services or speed test websites, my location appears to be my ISP's server location (which in my case is 250 miles away). But when I tried Google Latitude, it pinpointed my exact location to within less than 100 meters! I use Windows Vista and Google Chrome, and when I got the message that "Google is trying to locate you", I agreed, just to check what the result would be. It was scary, very scary! What I've come up with after reading the above link is that Google has a kind of extensive database of WiFi locations. That would be understandable in the case of public, open WiFis used by a lot of people: some of them might be running applications that gather location data, and somehow this information ends up in giant Google databases. From those, Google could pinpoint a WiFi location based on its MAC address, along with the bits of info gathered from various sources. The issue here is that my WiFi is private; I don't even broadcast my WiFi name. So how on earth did Google find my exact PC location? Please break down the answer in layman's terms as much as possible.

    Read the article

  • Syncing Multiple Google Calendars and with Outlook and Android

    - by Fred Thomas
    Perhaps this is a multipart question, but I deal with a lot of calendars in my life and want to know if there is some way to sync them all together while maintaining appropriate privacy. I have a family calendar that my ex and I maintain for kid events, a personal calendar for my own life, and an Outlook calendar for work. Ideally I'd look at my calendar on my Android phone. Is it possible to sync them all together? Is it possible to have one calendar to rule them all on my phone, while each of the other calendars blocks out the times taken from the rest, showing them only as busy without the details? (I don't want my date with Miss Hottie to appear that way on the family calendar, and I probably don't want my visit to the proctologist to appear on the corporate Exchange server.) Are there tools available to do this? Bonus question: can I do the same with my to-do lists? Double bonus question: how can I solve world hunger and help us all live together in peace? :-)

    Read the article

  • wget crawling search results of news website

    - by kiltek
    I am trying to crawl the search results of a news website using wget. The name of the website is www.voanews.com. After typing in my search keyword and clicking search, it proceeds to the results. Then I can specify a "to" and a "from" date and hit search again. After this the URL becomes:

        http://www.voanews.com/search/?st=article&k=mykeyword&df=10%2F01%2F2013&dt=09%2F20%2F2013&ob=dt#article

    and the actual content of the results is what I want to download. To achieve this I created the following wget command:

        wget --reject=js,txt,gif,jpeg,jpg \
             --accept=html \
             --user-agent=My-Browser \
             --recursive --level=2 \
             "www.voanews.com/search/?st=article&k=germany&df=08%2F21%2F2013&dt=09%2F20%2F2013&ob=dt#article"

    Unfortunately, the crawler doesn't download the search results. It only gets into the upper link bar, which contains the "Home, USA, Africa, Asia, ..." links, and saves the articles they link to. It seems like the crawler doesn't check the search result links at all. What am I doing wrong, and how can I modify the wget command to download only the links in the search result list (and, of course, the pages they link to)?

    Read the article

  • Unable to Connect to just Google Servers

    - by Akshat Mittal
    I am in an extremely strange situation: I am unable to connect to Google servers only. I am not able to access any site related to Google. Google.com | YouTube | Google+ | Webmaster Tools | jQuery CDN; nothing is working. I am able to open any other website (as I am posting this question on Super User), and even the Google DNS servers (8.8.8.8 and 8.8.4.4) are offline. Please help!

    Update 1: The Google DNS servers are back online, and YouTube is back online, but websites on the google.com domain are still not working (e.g., play.google.com, maps.google.com, google.com/search, etc.).

    Update 2: I am able to access Google.com (only) via one of its IP addresses, listed below:
    74.125.227.41
    74.125.227.46
    74.125.227.32
    74.125.227.33
    74.125.227.34
    74.125.227.35
    74.125.227.36
    74.125.227.37
    74.125.227.38
    74.125.227.39
    74.125.227.40

    Update 3: I consulted my friends nearby, and they said they are experiencing the same problem. Seems like this is a major problem in this area (or all of India!).

    The problem is now solved! I am able to open Google.com.

    Read the article
